Research Article | PHYSICS
Attosecond timing of electron emission from a molecular shape resonance
Science Advances, 31 Jul 2020: Vol. 6, no. 31, eaba7762. DOI: 10.1126/sciadv.aba7762

Shape resonances, due to trapping of particles in potential barriers, are ubiquitous in nature. First discovered by Fermi et al. when studying slow-neutron capture in artificial radioactivity (1, 2), they have since been the focus of countless investigations in physics, chemistry, and biology. They play a crucial role in α-decay of radioactive nuclei (3), molecular fragmentation (4), rotational predissociation (5), electron detachment (6), ultracold collisions (7), low-energy electron scattering (8, 9), and photoionization (10, 11), to name a few. They are also thought to be at the origin of enhanced radiation damage of DNA and other biomolecules (12) and to play an important role in the stability of Bose-Einstein condensates (13). Shape resonances are usually associated with specific spectral features. For instance, they lead to broad peaks in the photoionization spectrum of atoms and molecules (14–17) and to a strong variation in the corresponding photoelectron angular distribution as a function of kinetic energy (18). The energies at which these resonances are expected to appear and the corresponding trapping times (or, conversely, decay lifetimes) are entirely determined by the shape of the barrier seen by the trapped (or ejected) particle, hence the name "shape resonances." Thus, the analysis of the resonance peaks observed in experimental spectra can be used to infer the actual height and width of the potential barrier seen by the impinging or emitted particle. This can be done unambiguously in atomic systems. However, in molecules, the situation is more complicated, as nuclear motion may cause the potential felt by the electrons to change during a vibrational period, thus affecting the shape of the barrier. This was theoretically predicted back in 1979 by Dehmer, who argued that the anomalous variations of vibrationally resolved photoelectron angular distributions of N2 with photon energy could be the consequence of changes in the shape of the potential barrier with the internuclear distance (10). This prediction has been the subject of debate for years (11). So far, direct time-resolved measurements of the dynamical evolution of shape resonances as the molecule vibrates have not been possible due to the lack of temporal resolution. In this work, we investigate with attosecond time resolution the changes induced by the vibrational motion on the potential barrier that sustains the 3σg⁻¹ shape resonance in N2. We do so by ionizing the molecule with extreme ultraviolet (XUV) radiation consisting of high-order harmonics spanning an energy range between 20 and 40 eV (see Fig. 1A). Temporal information is obtained by using the RABBIT (reconstruction of attosecond beating by interference of two-photon transitions) interferometric method, described below (19). In particular, we determine the (relative) photoionization delay, which is the time the electron takes to escape from the molecular potential and is thus sensitive to the presence of a shape resonance.
With the support of theoretical calculations that explicitly describe the photoionization process and take into account the vibrational motion, we show that changes in the molecular bond length of only 0.02 Å can lead to variations of the photoionization delay as large as 200 as, and that these variations are not uniform over the investigated energy range. These measurements with attosecond time resolution show that molecular photoionization in the vicinity of shape resonances cannot be described in terms of the commonly accepted Franck-Condon picture, in which electronic excitation is assumed to be instantaneous and decoupled from the nuclear motion. Fig. 1 Photoionization scheme. (A) A comb of high-order harmonics (HH), spanning the photon energy range of 20 to 40 eV, probes the entire shape resonance region in N2+. The blue dashed line corresponds to experimental photoionization cross sections [taken from (30)] across the resonance. The scheme for generating sideband 18 (SB18) is denoted by the vertical arrows. The violet (red) arrows denote XUV (NIR) photons. (B) Potential energy curves of the ground state of N2, X¹Σg⁺ (black), and of the lowest three states of N2+: X²Σg⁺ (blue), A²Πu (red), and B²Σu⁺ (green). The shaded area (1.06 to 1.14 Å) denotes the Franck-Condon region. The horizontal lines show the positions of the different vibrational levels associated with the corresponding electronic state. The RABBIT technique has been widely used to determine accurate photoionization delays in atoms (20, 21) and, following the pioneering work of Haessler et al. (22), has started to be applied to molecular systems as well (23–25). Here, N2 is ionized by a comb of odd high-order harmonics covering the 3σg⁻¹ shape resonance (see Fig. 1A). Electrons are mainly removed from the 3σg and 1πu orbitals of the N2 molecule in its X¹Σg⁺ ground state, leading to N2+ ions in the X²Σg⁺ and A²Πu states (X and A states for short), respectively (see Fig. 1B). In our experiment (see Materials and Methods for details), a near-infrared (NIR) 45-fs laser pulse generated high-order harmonics in argon, corresponding to a train of attosecond pulses in the time domain (26). The harmonics and a weak replica of the NIR pulse (probe) were focused into an effusive gas jet containing N2 molecules. The ejected electrons were detected by a magnetic bottle electron spectrometer with a resolving power up to E/ΔE ∼ 80. To use the best possible resolution, and hence resolve the vibrational levels in the first two outer valence states of the N2+ ion, we retarded the photoelectrons by suitable voltages before they entered the spectrometer flight tube. The measurements consisted of recording, for alternating shots, the XUV-only and XUV + NIR photoelectron spectra (PES) as a function of the delay (τ) between the XUV and NIR pulses. We perform simulations to explicitly obtain the vibrationally resolved PES resulting from the interaction of an isolated N2 molecule with an XUV attosecond pulse train and a time-delayed NIR pulse, so that the extracted time delays can be directly compared with experiment. The time-dependent Schrödinger equation is numerically solved including the bound-bound, bound-continuum, and continuum-continuum dipole transition matrix elements between the electronic states. These are computed using the static exchange density functional theory method described in (27).
The nuclear motion is taken into account within the Born-Oppenheimer approximation (see the "Theoretical Methods" section in the Supplementary Materials for a detailed explanation), and the laser parameters are chosen to reproduce the experimental conditions. The measured and calculated PES, obtained with XUV and NIR, exhibit sidebands originating from the interference between two quantum paths: the absorption of a harmonic and an NIR photon, and the absorption of the next harmonic and the stimulated emission of an NIR photon (see Fig. 1A). Consequently, the amplitude of the sidebands, ASB, oscillates as a function of τ according to the formula (19)

ASB = A + B cos[2ω0(τ − τXUV − τmol)]   (1)

where A and B are two constants, ω0 is the angular frequency of the driving NIR field, τXUV denotes the group delay of the attosecond pulses (28), and τmol is the molecular two-photon ionization time delay (see the Supplementary Materials) (29). The combination of high spectral and high temporal resolution achieved in our experiment allows us to distinguish photoelectrons leaving the residual molecular cation in different vibrational states and to determine the variation in photoionization delays due to changes in the molecular geometry. Figure 2 (A and B) shows the experimental and theoretical data, respectively, corresponding to the difference between the XUV + NIR and XUV-only PES (in color), which oscillates with frequency 2ω0 as a function of τ. Theory and experiment are in excellent agreement (for details about the theoretical method, see the Supplementary Materials). Figure 2C presents the XUV-only (violet) and XUV + NIR (black) PES obtained by integrating over all delays. The agreement with the calculated spectra shown in Fig. 2D is good. The difference between theory and experiment in the relative intensities of some of the photoelectron peaks is due to the different position of the shape resonance predicted by theory (see Fig. 3B). In addition, the harmonic comb used in the theoretical calculations was slightly different from the experimental one. Figure 2E presents individual contributions from the X (blue) and A (red) states in the theoretical XUV + NIR PES, which allows us to assign the different features of the experimental spectra. The structures between 14.5 and 16 eV, for example, are due to ionization to the A state by absorption of the 21st harmonic, leaving the N2+ ion in the v′ = 0–6 vibrational states, and to a two-photon transition (sideband) to the X state with vibrational states v′ = 0, 1 (blue shaded region). The small peaks near 14 eV are due to two-photon transitions to the A state with vibrational states v′ = 0, 1 (red shaded region). Plotting the difference between the XUV + NIR and XUV-only PES allows us to confirm our assignment, since, as shown in Fig. 2A, we can distinguish between sideband (positive) and harmonic (negative) peaks. Fig. 2 Photoelectron spectra. Difference between PES obtained with XUV + NIR and XUV only, as a function of delay: experiment (A) and theory (B). Experimental (C) and theoretical (D) PES for XUV-only (violet) and XUV + NIR (black) photoionization, averaged over all relative delays. (E) Theoretical PES for XUV + NIR photoionization to the X (blue) and A (red) electronic states. By comparing (A) and (E), we can assign the spectral features to different vibrational levels (shaded areas) of the X and A electronic states, as indicated by the vertical blue and red dashed lines, respectively. Fig. 3 Photoionization time delays.
(A) Differences in molecular time delays (τmol) between the X and A states for v′ = 0. Red circles, experiment; black circles, theory. (B) Partial photoionization cross sections for the X state: open squares, synchrotron-based experiment (30, 31); solid line, theory (this work). The position of the resonance maximum is shifted by almost 6 eV (denoted as ↔) between theory and experiment. This shift is also observed in the relative time delays (A and C). (C) Relative time delay between the vibrational levels v′ = 1 and 0 for the X state. The strong photon energy dependence observed here vanishes completely if one neglects the nuclear motion (see fig. S4). (D) Same as (C), but for the A state. To determine molecular two-photon ionization time delays τmol, we fitted the measured sideband oscillations (Fig. 2A) to Eq. 1. The same procedure was applied to extract the theoretical delays from the RABBIT spectra computed with the time-dependent numerical approach. Figure 3 (A, C, and D) shows experimental (red circles) and theoretical (black circles) relative time delays for different final states. Since the contribution from the ionizing radiation (τXUV) is the same for all the final states of N2+, the plotted differences correspond to pure molecular contributions. The relative molecular time delay for leaving N2+ in the X(v′ = 0) state with respect to leaving it in the A(v′ = 0) state is shown in Fig. 3A. For both theory and experiment, this relative delay varies by more than 40 as across the shape resonance region. The theoretical results, however, are shifted to lower energies with respect to the experimental ones by almost 6 eV, as indicated by the green arrow in Fig. 3A; this is similar to the shift of the maximum of the calculated photoionization cross section in comparison with that obtained from synchrotron radiation measurements (see Fig. 3B) (30, 31). This is due to an incorrect description of the resonance position by our theoretical method, which is necessarily simpler than state-of-the-art electronic structure methods for the equilibrium geometry (32), as we must describe the molecular electronic continuum states in a wide range of internuclear distances, as well as all the continuum-continuum transitions induced by the NIR probe pulse (see the Supplementary Materials). Figure 3 (C and D) shows the relative molecular time delays, τX(v′ = 1) − τX(v′ = 0) and τA(v′ = 1) − τA(v′ = 0), for the X and A electronic states, respectively. For the A state, the relative delay is very small and practically independent of photon energy, while for the X state, it varies markedly across the shape resonance. Once again, for the reasons described above, the theoretical curve is shifted down in energy with respect to the experimental one by ∼6 eV. The variation of the molecular time delay differences between the X and A states observed in Fig. 3A is therefore mainly due to the variation of the time delay for the former, which can be attributed to the presence of the shape resonance. The time delay varies with energy because the time spent by the photoelectron in the metastable state before being ejected into the continuum also varies with energy. Since, for the A state, the electron does not have to go through any potential barrier, the corresponding time delay is much smaller than for the X state. We now analyze the physical meaning of the results presented in Fig. 3 (C and D).
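As an aside on the extraction step just described, here is a minimal sketch of fitting Eq. 1 to a sideband oscillation with SciPy. This is an illustration, not the authors' analysis code: the delay grid, amplitudes, noise level, and the value of ω0 are invented. The fit covariance provides the per-sideband uncertainty used in the error analysis (Eqs. 2 to 4 below).

import numpy as np
from scipy.optimize import curve_fit

omega0 = 2.35  # NIR angular frequency in rad/fs for ~800 nm (assumed value)

def sideband(tau_delay, A, B, tau):
    # Eq. 1: A_SB = A + B*cos[2*omega0*(tau_delay - tau)], with tau = tauXUV + taumol
    return A + B * np.cos(2 * omega0 * (tau_delay - tau))

# hypothetical sideband amplitude vs XUV-NIR delay, with a 150-as delay built in
delays = np.linspace(0.0, 5.0, 60)  # fs
rng = np.random.default_rng(0)
signal = sideband(delays, 1.0, 0.4, 0.150) + 0.02 * rng.standard_normal(delays.size)

popt, pcov = curve_fit(sideband, delays, signal, p0=[1.0, 0.3, 0.0])
tau_fit, sigma_tau = popt[2], np.sqrt(pcov[2, 2])
print("extracted delay: %.0f +/- %.0f as" % (tau_fit * 1e3, sigma_tau * 1e3))

A relative delay between two channels then follows by subtracting two such fitted values, with the uncertainties combined in quadrature (Eq. 2 below).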
In atomic systems, photoionization time delays obtained from RABBIT measurements can often be written as the sum of two contributions, τ1 + τcc. The first term is related to one-photon ionization by the XUV field. For a single or dominant ionization channel containing no sharp structures in the continuum (e.g., narrow Fano resonances), τ1 is given by the derivative of the scattering phase in that particular channel, the so-called Wigner delay (33, 34). The second term, τcc, is the additional time delay due to the continuum-continuum transitions induced by the NIR field (35, 36). In the vicinity of the shape resonance, one-photon ionization leading to N2+ in the X state is dominated by the f-wave (ℓ = 3). Similarly, for the A state, the d-wave (ℓ = 2) dominates over all other partial waves in the same photon energy region (see fig. S2). Although the molecular two-photon ionization time delay τmol cannot be strictly decomposed as τ1 + τcc, owing to averaging over molecular orientation and electron emission angle, the variation of τmol in Fig. 3 (C and D) still reflects the ionization dynamics arising from the main channels (29). Figure 4 shows the modulus and phase of the dominant terms contributing to the dipole transition element as a function of electron kinetic energy ε and internuclear distance R for both electronic states (see the Supplementary Materials for notations). For the X state, at a given ε, both the modulus (Fig. 4A) and phase (Fig. 4B) of the dipole transition element vary strongly with R within the Franck-Condon region, in contrast to the A state (Fig. 4, C and D). This implies that electronic transitions cannot be considered instantaneous relative to nuclear motion, as assumed by the widely used Franck-Condon picture. Consequently, the molecular photoionization delays, obtained by taking the derivative of the phase of the dipole transition element, strongly depend on R. The difference in molecular time delays between the v′ = 1 and 0 vibrational levels thus provides direct information on non-Franck-Condon ionization dynamics, i.e., on how the nuclear motion affects the photoionization process. Fig. 4 One-photon dipole transition matrix elements. Modulus (A and C) and phase (B and D) of the dominant terms d^X_{σu,ℓ=3,z} (A and B) and d^A_{δg,ℓ=2,x} (C and D) contributing to the one-photon transition matrix elements of the X and A states, respectively, as a function of the internuclear distance (R) and electron kinetic energy (ε). The area between the two dashed lines denotes the Franck-Condon region. Figure 5A shows the absolute square of the product of the initial vibrational wave function, the transition matrix element for the X state (see Fig. 4A) at an electron kinetic energy of 8.2 eV, and the final vibrational wave function (see the Supplementary Materials for details). The initial and final vibrational wave functions correspond, respectively, to the v = 0 level of N2 in the ground electronic state and to the v′ = 0 (black) and v′ = 1 (red) levels of N2+ in the X electronic state. These curves have well-defined maxima at R = 1.113 and 1.09 Å, respectively, showing that the transition to the shape resonance occurs, on average, at smaller internuclear distances for v′ = 1 than for v′ = 0. This small difference in bond length of ∼0.02 Å has a notable impact on the electron dynamics.
As illustrated in Fig. 5B, the potential felt by the emitted photoelectron at these two internuclear distances is different, leading to a higher resonance energy for R = 1.09 Å (v′ = 1) than for R = 1.113 Å (v′ = 0), as seen by comparing the red and black dashed lines. In addition, because of the different slopes on the rising (left inset in Fig. 5B) and falling (right inset in Fig. 5B) edges, the barrier is narrower, and the resonance lifetime is shorter, for v′ = 1 than for v′ = 0. Fig. 5 Electron dynamics induced by structural changes. (A) Absolute square of the transition matrix element d^X_{σu,ℓ=3,z} at an electron kinetic energy of 8.2 eV, multiplied by the initial (χi, v = 0) and final (χf, v′) vibrational wave functions, Pv,v′, as a function of the internuclear distance R. The red and black curves correspond to transitions from the v = 0 level in the neutral ground state to the v′ = 1 and the v′ = 0 levels of the X state, respectively. (B) Potential felt by an electron escaping from an N2 molecule with internuclear distance Rv′=1 = 1.09 Å (red) and Rv′=0 = 1.113 Å (black), calculated at the same level of theory as the dipole matrix elements. These Kohn-Sham potentials are shown along the internuclear axis, with the two wells representing the two N atoms. The origin is placed at the center of mass of the molecular ion. The dashed lines show the corresponding resonance energies. As shown in the insets, the barrier width depends on R and, hence, on the vibrational state of the ion. (C) Photoionization delays (τ1) as a function of photon energy obtained at the internuclear distances Rv′=1 = 1.09 Å (red) and Rv′=0 = 1.113 Å (black), and the relative delay between them, τ1X(Rv′=1) − τ1X(Rv′=0) (green). The resonance lifetimes (full circles) and widths (horizontal bars) calculated in the Wentzel-Kramers-Brillouin (WKB) approximation from the two potentials presented in (B) are also shown. Figure 5C shows the photoionization delays resulting from the one-photon dipole transition matrix elements calculated at the two abovementioned internuclear distances as a function of photon energy. The difference in internuclear distance leads to a noticeable shift in the position of the corresponding maxima of the photoionization delays, in agreement with the difference in resonance energy discussed above and with the positions of the maxima calculated using the WKB approximation, indicated by the black and red dots. In addition, the energy range where the photoionization delays vary substantially is slightly broader for v′ = 1 than for v′ = 0, a direct consequence of the shorter lifetime for v′ = 1. This can also be seen from the horizontal bars, representing the resonance widths obtained with the WKB approximation. Both effects, due to the shape resonance, contribute to a variation of the time delay difference between the v′ = 1 and v′ = 0 states, as indicated by the green curve. Such a simple model predicts the main features of the experimental results in Fig. 3C, particularly the change of sign of the relative photoionization delay at low photon energy and the maximum at approximately the resonance position. It is worth noting that the resonance lifetimes obtained from the WKB model, 139.6 and 163 as for the v′ = 1 and v′ = 0 levels, respectively, are much smaller than the corresponding vibrational period, which is of the order of 16 fs (see the Supplementary Materials for details). As a consequence, the nuclei barely move during the ionization process, thus supporting the above analysis.
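To make the WKB estimate concrete, the following sketch shows the arithmetic for a one-dimensional toy barrier (atomic units with m_e = ħ = 1 are assumed; the potential below is invented for illustration and is not the N2 Kohn-Sham potential of Fig. 5B). The lifetime is estimated as the classical round-trip time in the inner well divided by the WKB barrier transmission exp(−2∫κ dx).

import numpy as np

def lifetime_wkb(V, E, x_well, x_barrier):
    # classical momentum in the well; decay constant kappa under the barrier
    k_well = np.sqrt(2 * np.clip(E - V(x_well), 1e-12, None))
    kappa = np.sqrt(2 * np.clip(V(x_barrier) - E, 0, None))
    T_round = 2 * np.trapz(1.0 / k_well, x_well)             # round-trip (attempt) time
    transmission = np.exp(-2 * np.trapz(kappa, x_barrier))   # WKB tunneling probability
    return T_round / transmission

# toy shape: flat inner region with a Gaussian barrier centered at x = 2 (assumed)
V = lambda x: 0.8 * np.exp(-((x - 2.0) / 0.4) ** 2)
x_well = np.linspace(0.05, 1.5, 400)    # classically allowed region for E = 0.3
x_barrier = np.linspace(1.5, 2.5, 400)  # region containing the barrier
tau_au = lifetime_wkb(V, 0.3, x_well, x_barrier)
print("WKB lifetime: %.0f as" % (tau_au * 24.2))  # 1 a.u. of time = 24.2 as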
In summary, we measured vibrationally resolved molecular photoionization time delays between the X and A electronic states in N2 across the 3σg⁻¹ shape resonance using attosecond interferometry. This enabled us to capture the changes induced by nuclear motion on the centrifugal barrier seen by the escaping photoelectron. The observation of such changes goes beyond the usual Franck-Condon approximation, which assumes that electronic transitions are instantaneous in comparison with nuclear motion. By combining attosecond time resolution with high spectral resolution, we were able to cross the temporal frontier beyond which the signature of the "slow" nuclear motion in molecular photoionization can be seen and quantified. This approach should make it possible to investigate the effect of nuclear motion on a variety of electronic processes in more complex molecular systems at the subfemtosecond time scale.

Experimental methods
The output of a Ti:sapphire laser system delivering NIR pulses around 800 nm with 5-mJ pulse energy at 1-kHz repetition rate was sent to an actively stabilized Mach-Zehnder-type interferometer. In the "pump" arm, the NIR pulses were focused into a gas cell containing argon atoms to produce a train of attosecond XUV pulses via high-order harmonic generation. A 200-nm-thick aluminum foil was used to filter out the copropagating NIR pulse. The bandwidth of the driving NIR pulse was kept around 50 nm, which ensured the generation of high-order harmonics with a full width at half maximum of about 150 meV. This is notably smaller than the energy separation (267 meV) between the lowest vibrational levels (v′ = 1 and v′ = 0) of the X electronic state in N2+ (see the Supplementary Materials), which is essential for vibrationally resolving the measured PES. In the "probe" arm, the NIR pulses could be delayed relative to the XUV pulses by a piezoelectric-controlled delay stage and were blocked on alternate shots by a mechanical chopper. After recombination, the collinearly propagating XUV and NIR pulses were focused by a toroidal mirror into an effusive gas jet of N2. The emitted photoelectrons were detected by a 2-m-long magnetic bottle electron spectrometer with 4π solid angle collection. We estimated an average NIR intensity of 8 × 10^11 W/cm² in the interaction region.

Data analysis
Because of the spectral congestion between ionization by XUV and XUV + NIR radiation to the three different states (X, A, and B) of the N2+ ion (see fig. S1), a careful analysis of the experimental data was needed. Hence, a spectrally resolved variant of the RABBIT protocol, called Rainbow RABBIT (37), was used to analyze the experimental data. Despite the overlap between the X and B states, we could determine the phases of the sideband oscillations by carefully choosing the region with the least possible spectral overlap. In addition, at relatively high photon energies (>25 eV), the photoionization cross section to the B state is much smaller than to the X state (29). Therefore, the measured time delays for the X state can be considered effectively free from any spectral contamination by the B state. For every sideband, a Fourier transform was performed to make sure that the sideband oscillation did not include frequency components higher than 2ω0. The uncertainty σX (σA) for each measurement of the molecular photoionization time delay τX (τA) was obtained from the fit of the RABBIT oscillation to a cosine function (see Eq. 1).
The corresponding uncertainty on the relative time delay, τX − τA, can be expressed as

σXA = √(σX² + σA²)   (2)

An identical procedure was used to calculate the uncertainties for the relative time delays between two vibrational levels of the same electronic state. The final experimental values shown in Fig. 3 were obtained from a weighted average of the data points from several sets of measurements. For N measurements yielding N data points k1, k2, …, kN with corresponding uncertainties σ1, σ2, …, σN, the weighted average can be calculated as

k̄ = Σi wi ki / Σi wi   (3)

where the sums run over i = 1, …, N and wi = 1/σi² is the weight. The error bars indicated in Fig. 3 are the weighted standard deviation, defined as

σk̄ = √[ N Σi wi (ki − k̄)² / ((N − 1) Σi wi) ]   (4)

Supplementary material for this article is available at

Acknowledgments: S.N. acknowledges fruitful discussions with V. Loriot and F. Lépine. Calculations were performed at the CCC-UAM and the MareNostrum Supercomputer Center. Funding: We acknowledge the support from the ERC advanced grant PALP-339253, the Swedish Research Council (grant no. 2013-8185), the Knut and Alice Wallenberg Foundation, and the European COST Action AttoChem (CA18222). E.P., A.P., and F.M. acknowledge the support of the MINECO project FIS2016-77889-R. F.M. acknowledges support from the "Severo Ochoa" Programme for Centres of Excellence in R&D (MINECO, grant SEV-2016-0686) and the "María de Maeztu" Programme for Units of Excellence in R&D (CEX2018-000805-M). A.P. acknowledges the support of a Ramón y Cajal contract (RYC-2014-16706) from the Ministerio de Economía y Competitividad (Spain). E.P. acknowledges the support of a Juan de la Cierva contract (IJCI-2015-26997) from the Ministerio de Economía y Competitividad (Spain). Author contributions: S.N. conceived the experiment, the planning for which was further improved by input from S.Z., A.L.H., and M.G. S.N., S.Z., D.B., M.I., L.N., and C.L.A. carried out the experiment. S.N. and S.Z. performed the data analysis. E.P. performed the theoretical calculations under the supervision of A.P., P.D., and F.M. The model was developed by E.P., A.P., and F.M. R.J.S. and R.F. provided the magnetic bottle electron spectrometer. M.G., A.L.H., and F.M. supervised the project. S.N., A.L.H., M.G., and F.M. wrote the manuscript with input from all the authors. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
I'm trying to understand the Schrödinger equation and how to solve it numerically a bit better, and I'm running into some doubts while coding, even though I am adapting existing code to this situation. I tried asking this on Physics Stack Exchange and was directed here. This is the situation I am trying to solve:

[figure: an infinite well with a rectangular barrier in the middle; αb is just b multiplied by some constant]

I have a running code, but since I am basing it off of someone else's code, there's one setting I don't seem to understand (probably some basic concept). I'm having trouble defining the wave function's initial state: how can I define the initial state of psi? I'm adding the code in case anyone wants to play with it.

from pylab import *
from scipy.integrate import odeint
from scipy.optimize import brentq

a = 1.0          # half-width of the central barrier (value not given in the post; assumed)
B = 1.0          # width of the well region on each side of the barrier (assumed)
L = B + a        # right edge of the well
Vmax = 50        # height of the outer walls
Vo = 50          # height of the central barrier
N = 1000         # number of points to take
psi = np.zeros([N, 2])   # wave function values and its derivative (psi and psi')
psi0 = array([0, 1])     # initial state [psi(x0), psi'(x0)] at the left edge of the grid
E = 0.0          # global variable Energy needed for Sch.Eq, changed in Wave_function()
b = L            # point outside of well where we need to check if the function diverges
x = linspace(-B - a, L, N)  # x-axis

def V(x):
    """Potential function: central barrier of height Vo inside a well with walls of height Vmax."""
    if -a <= x <= a:
        return Vo        # central barrier
    elif x <= -a - B or x >= L:
        return Vmax      # outer walls
    else:
        return 0.0       # flat well regions on either side of the barrier

def SE(psi, x):
    """Returns derivatives for the 1D Schrödinger eq. Requires global value E to be set
    somewhere. state0 is the first derivative of the wave function psi, and state1 is
    its second derivative."""
    state0 = psi[1]
    state1 = 2.0 * (V(x) - E) * psi[0]
    return array([state0, state1])

def Wave_function(energy):
    """Calculates wave function psi for the given value of energy E and returns
    its value at point b."""
    global psi
    global E
    E = energy
    psi = odeint(SE, psi0, x)
    return psi[-1, 0]

def find_all_zeroes(x, y):
    """Gives all zeroes in y = Psi(x)."""
    all_zeroes = []
    s = sign(y)
    for i in range(len(y) - 1):
        if s[i] + s[i + 1] == 0:
            zero = brentq(Wave_function, x[i], x[i + 1])
            all_zeroes.append(zero)
    return all_zeroes

def find_analytic_energies(en):
    """Calculates energy values for the finite square well using the analytical model
    (Griffiths, Introduction to Quantum Mechanics, 1st edition, page 62)."""
    z = sqrt(2 * en)
    z0 = sqrt(2 * Vo)
    f_sym = lambda z: tan(z) - sqrt((z0 / z) ** 2 - 1)        # Formula 2.138, symmetrical case
    f_asym = lambda z: -1 / tan(z) - sqrt((z0 / z) ** 2 - 1)  # Formula 2.138, antisymmetrical case

    # first find the zeroes for the symmetrical case
    z_zeroes = []
    s = sign(f_sym(z))
    for i in range(len(s) - 1):  # find zeroes of this crazy function
        if s[i] + s[i + 1] == 0:
            z_zeroes.append(brentq(f_sym, z[i], z[i + 1]))
    print("Energies from the analytical model are:")
    print("(Symmetrical case)")
    for i in range(0, len(z_zeroes), 2):  # discard z=(2n-1)pi/2 solutions, where tan(z) is discontinuous
        print("%.4f" % (z_zeroes[i] ** 2 / 2))

    # now for the antisymmetrical case
    z_zeroes = []
    s = sign(f_asym(z))
    for i in range(len(s) - 1):
        if s[i] + s[i + 1] == 0:
            z_zeroes.append(brentq(f_asym, z[i], z[i + 1]))
    print("(Antisymmetrical case)")
    for i in range(0, len(z_zeroes), 2):  # discard z=n*pi solutions, where cot(z) is discontinuous
        print("%.4f" % (z_zeroes[i] ** 2 / 2))

def main():
    en = linspace(1e-6, Vo, 1000)  # vector of energies where we look for the stable states
    psi_b = []                     # vector of wave function at x = b for all energies in en
    for e1 in en:
        psi_b.append(Wave_function(e1))   # for each energy e1, find psi(x) at x = b
    E_zeroes = find_all_zeroes(en, psi_b) # now find the energies where psi(b) = 0

    # print energies for the bound states
    print("Energies for the bound states are:")
    for E in E_zeroes:
        print("%.2f" % E)
    # print energies of each bound state from the analytical model
    find_analytic_energies(en)

    # plot wave function values at b vs energy vector
    figure()
    plot(en / Vo, psi_b)
    title('Values of $\Psi(b)$ vs. Energy')
    xlabel('Energy, $E/V_0$')
    ylabel('$\Psi(x = b)$', rotation='horizontal')
    for E in E_zeroes:
        plot(E / Vo, 0, 'go')
        annotate("E = %.2f" % E, xy=(E / Vo, 0), xytext=(E / Vo, 30))

    # plot the wavefunctions for the first 4 eigenstates, with the potential for reference
    figure()
    pot = [V(xx) for xx in x]
    plot(x, pot, 'k--', label='V(x)')
    for E in E_zeroes[0:4]:
        Wave_function(E)  # recompute psi for this energy
        plot(x, psi[:, 0], label="E = %.2f" % E)
    legend(loc="upper right")
    title('Wave function')
    xlabel('x, $x/L$')
    ylabel('$\Psi(x)$', rotation='horizontal', fontsize=15)
    show()

if __name__ == "__main__":
    main()

• I've solved the finite square well potential using scipy.integrate.solve_bvp here. – Russell Anderson Aug 13 '19 at 8:41
• From experience it's much easier and faster to solve this equation with a tridiagonal matrix approach. See docs.solcore.solar/en/master/Examples/example_QWs.html (I was one of the authors of this module). – boyfarrell Aug 16 '19 at 16:57

Although this site is not intended for questions of the "please debug my program" kind, I believe that what's needed here is a clarification of the scientific computing aspects. The method you are adopting, from the link that you gave, is the shooting method. The program uses a numerical ordinary differential equation solver for the Schrödinger equation, which takes initial conditions $\psi(x_0)$ and $\psi'(x_0)$, for some chosen starting point $x_0$, and integrates forward in $x$. The output is a wavefunction at discrete values of $x$; the program carries out this integration for a range of energies $E$. Although your question refers to an infinite potential well, your diagram, your program, and your original source all deal with a finite potential well, so I'm going to stick with that. (To be honest, in the original program given at https://helentronica.com/2014/09/04/quantum-mechanics-with-the-python/, I can't be sure that the sign of $V_0$ is correct, but I haven't looked closely.) If you really mean an infinite well, then there are only slight differences, which I return to at the end. For simplicity I'm also going to stick with the symmetric case, $\alpha=1$.
The main part of the program checks the value of $\psi(b)$ for a position $x=b$ lying far outside the potential well; this is why the Wave_function function only returns the last element of the computed $\psi$ array. Only values of $E$ which give non-divergent wavefunctions are wanted; in fact, the boundary condition that we want to enforce is $\psi(x)\rightarrow 0$ as $x\rightarrow\infty$. The program uses the array of values of $\psi(b)$ for different energies to determine intervals of $E$ where $\psi(b)$ changes sign, and then uses Brent's method to refine the value of $E$ for which $\psi(b)=0$. To a good approximation, this will be one of the allowable eigenvalues and the corresponding $\psi(x)$ will be the eigenfunction.

The "state" which is required by the ODE solver is an array having two elements, $\psi$ and $\psi'$; the SE function returns their derivatives, i.e., the first derivative and the second derivative of $\psi$. This is because the Schrödinger equation is a second-order differential equation, which is being tackled here as two first-order equations: \begin{align*} \frac{d\psi}{dx} &= \psi'(x) \\ \frac{d\psi'}{dx} &= 2[V(x)-E]\psi(x) \end{align*} in your units. The psi array stores the values of $\psi(x)$ and $\psi'(x)$; the SE function returns the two quantities on the right-hand side of these equations. So the changes you have made to the SE function, returning 6 values, are on the wrong track. The value of the second derivative in the different regions of the potential well will depend on $x$ through the function $V(x)$, which you set up at the start of the program. So nothing in SE needs changing from the original code which you downloaded.

The other aspects needing attention are the ranges of $x$. In the original code you started working with, the width of the well is determined by $L$ (which was set to 1 in the sample program) and the check on $\psi(b)$ was made at $b\gg L$ ($b$ was set to 2 in the example). You need to do something similar. Various starting points $x_0$ (for the shooting integration) were considered in the original example; for a symmetric potential well, there is something to be said for starting at the midpoint, $x=0$, with either $\psi(0)=0$ or $\psi'(0)=0$ to produce, respectively, antisymmetric or symmetric solutions. But you can also start at $-b$, where $-b\ll -L$, and this might be better in your case. Finally, as someone noted in this answer https://physics.stackexchange.com/a/438299/197851 to your question on Physics SE, the locations of the edges of the well in your diagram do not match the same locations in your code.

So, to summarize: there are really no fundamental changes needed to the original program; just changing $V(x)$ should work. The SE routine does not need changing from the originally downloaded version. You need to pay more attention to the limits of the $x$ range used in the shooting method: you should end at a position which is outside the potential well on the right, and you might start from a position outside the well, on the left. I recommend starting again with the original program, where the well extends from $-L$ to $+L$ with $L=1$, and adding a symmetrical bump around the origin from $-a$ to $+a$, where $a<1$, keeping $b=2$. If you really want to deal with an infinite potential well, then you should set $b=L$ and enforce the boundary condition $\psi(b)=0$. In this case it also makes sense to start shooting at $x=-b$, with $\psi(-b)=0$ and $\psi'(-b)$ nonzero. Hopefully this helps.
As I indicated, this site isn't for program debugging, but if you don't follow the idea behind the method you are using, I'm happy to try and clarify.

• What I was hoping for was an answer like yours; sorry for not expressing myself correctly. – user169808 Nov 5 '18 at 16:47
• No worries. The intention of my first paragraph was to make clear that (IMHO) this question falls within the scope of the site, even though one needs to look at the code. Anyway, feel free to consider the answer, decide whether it is helpful, and whether to accept it or not (see scicomp.stackexchange.com/help/accepted-answer). There is no hurry: better answers from others might come along. Also, as I said, if it needs more clarification, just ask. – user28077 Nov 5 '18 at 16:56
• Thanks, you clarified some things for me. I'm trying to learn how to do these sorts of things in Python, on my own, so I'm bound to be a bit lost. Thanks for your help. – user169808 Nov 5 '18 at 17:01
• Hi, I'm trying to normalize the wave function, but when I integrate, the result I get is not equal to 1. I am integrating abs(psi)^2 from one side of the well to the other. Is there something I am missing to normalize these wavefunctions? In the end I want to calculate $\Delta x \Delta p$. – user169808 Nov 12 '18 at 12:21
• The Schrödinger equation is a linear equation, so if $\psi(x)$ satisfies it, so will $c\psi(x)$ where $c$ is any constant. This is equally true for your solutions: if you doubled the initial values of $\psi$ and $\psi'$, and redid the "shooting", you would just multiply your whole solution by $2$. It is quite usual to normalize the wavefunction "by hand" afterwards: calculate $C=\int |\psi(x)|^2 dx$ across the whole range of $x$, and then replace $\psi(x)\rightarrow \psi(x)/\sqrt{C}$. – user28077 Nov 12 '18 at 12:42
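Following up on the normalization question in the last comment, here is a minimal sketch of the "normalize by hand" step; the array names follow the code in the question, and trapezoidal quadrature is just one reasonable choice:

import numpy as np

def normalize(x, psi_values):
    # C = integral of |psi|^2 over the full x range, then psi -> psi / sqrt(C)
    C = np.trapz(np.abs(psi_values) ** 2, x)
    return psi_values / np.sqrt(C)

# usage, after Wave_function(E) has filled the global psi array:
#   psi_n = normalize(x, psi[:, 0])
#   np.trapz(psi_n ** 2, x)  # should now be 1 to numerical accuracy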
Submitting Campus: Daytona Beach
Department: Mathematics
Document Type: Publication/Presentation
Date:

A review of three-dimensional waves on deep water is presented. Three forms of three-dimensionality, namely oblique, forced, and spontaneous types, are identified. An alternative formulation for these three-dimensional waves is given through the cubic nonlinear Schrödinger equation. The periodic solutions of the cubic nonlinear Schrödinger equation are found using Weierstrass elliptic ℘ functions. It is shown that the classification of solutions depends on the boundary conditions, wavenumber, and frequency. For certain parameters, the Weierstrass ℘ functions reduce to periodic, hyperbolic, or Jacobi elliptic functions. It is demonstrated that some of these solutions do not have any physical significance. An analytical solution of the cubic nonlinear Schrödinger equation with wind forcing is also obtained, which shows how groups of waves are generated on the surface of deep water in the ocean. In this case, depending on the energy-transfer parameter from wind to waves, the wave groups either grow initially and eventually dissipate, or simply decay, or grow in time.

Publication Title: Advances and Applications in Fluid Dynamics
Publisher: Pushpa Publishing House
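As a quick illustration of the kind of reduction described in the abstract (a sketch, not the paper's own derivation): when the elliptic-function solutions degenerate to hyperbolic functions, the focusing cubic nonlinear Schrödinger equation, written in the normalized form iψ_t + ψ_xx + 2|ψ|²ψ = 0, admits the bright soliton ψ = e^{it} sech x, which can be checked symbolically:

import sympy as sp

x, t = sp.symbols('x t', real=True)
psi = sp.exp(sp.I * t) * sp.sech(x)  # hyperbolic limit of the elliptic family

# |psi|^2 = sech(x)^2 because |exp(i*t)| = 1
residual = sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) + 2 * sp.sech(x) ** 2 * psi
print(sp.simplify(residual))  # prints 0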
Saturday, April 8, 2017

Saturday Night Radio Drama - Philip Doherty - A Golden Triangle

Author Philip Doherty earns his second such award [First Place in the P. J. O'Connor Prize] with the interrupted monologues that make up A GOLDEN TRIANGLE; and a parallel double deployment in production features a distinguished father-and-daughter team, Brendan and Neilí Conroy, along with actor Gerry McCann, in the casting of the trio. Mattie was played by Brendan Conroy, Jill was Neilí Conroy, and Edek was played by Gerry McCann. It's another Irish national radio production, which means you have to download it to listen to it. I listened to an interview with the late Edward Albee last week in which he said that he thought most plays were too long. Listening to this, I wondered if that's one of the reasons I like radio plays so much: they tend to be an hour or less. Another reason is that you get to hear a lot of plays by writers you'd never, ever see in a stage production or acted out in a film. It's so much cheaper to do a play this way, especially a non-conventional kind of play. I don't miss the visuals.

Charlie Pierce Is Capable of Greatness

His piece from yesterday about Trump's smoke screen* attack on a Syrian airbase Friday, Washington Is Void of Any Sense of Restraint, and the politics of authorizing war is certainly one of the five most important things I've read this year, perhaps this decade. If you don't have the time to wade through my piece below, Charles Pierce's piece is more important and much shorter.

* Is it true that they warned the Assad regime so they could remove anything they didn't want to get blown to smithereens? I can point out that in 2013, Trump and other Republican-fascist-racists were accusing Barack Obama of being about to do exactly what it is so obvious that Trump did. I still wonder if it might not have been set up by Putin, getting one of his lesser assets, Assad, to provide his major asset, Trump, with a way to deflect attention from the treason of Trump and his regime with the Russian oligarchs.

"The feeling of security given by the reductionist approach is in fact illusory"

"Such a situation would put a heavy strain on our faith in our theories and on our belief in the reality of the concepts which we form." Eugene Wigner

The atheist religion needs to believe in something that no sensible person, including many atheists, would ever believe: that just because someone, somewhere can think of an equation for something, that equation must be about something real in reality. Which, as superstition, ranks right up there with the Republican-fascist conspiracy theories about what happens in the basement of a pizza joint that doesn't have a basement. Yes, someone didn't like what I wrote about that last week. I think it's possible that what physicists actually found when they couldn't make their equations come up with an absolute measure of the position of an electron in quantum physics was probably a hard fact about using mathematics as a powerful but ultimately and inevitably limited tool for addressing physical reality. When you press the tool to that limit, perhaps you will find that it doesn't reveal the hard, solid, dependable view of the universe that constitutes a classical concept of objective reality that is external from the observer.
Which isn't some huge shock; it's been known to be true of the act of measurement ever since people started using standard measuring conventions, most obviously when trying to apply those standard measuring conventions to some physical object or, say, a plot of land or, especially, something which changes in dimension. There has always been a built-in margin of error in any act of measurement; sometimes you can't even really measure or even really estimate what that margin of error is. That such discrepancies exist in more sophisticated acts of measurement, involving more than one possible vector in which they can cloud the issues, isn't really shocking. That it shocked the physicists who had gotten into the habit of believing that the impressive physics that Newton and his successors came up with showed us a solid, reliable, and, most cherished of all by them, an OBJECTIVE view of reality only shows that what they were using physics for up till that point could work within the range of uncertainty that it always had included.

That range of uncertainty was always acceptable in terms of the human use of knowledge. It was always there; the idea that what classical physics gave was an objective view of reality was, itself, an accepted fudging of reality. An aspect of how that faith in the objective reality of what physics showed us was maintained was the physicists' often very useful practice of reducing the complexity of what they studied and observed, either through choosing a very simple object and a few aspects of its movement to study or through ignoring the range of possible influences on its movement. As the means of measuring the objects they were studying and aspects of the objects became sufficiently precise and sophisticated in the later 19th century, problems with the imprecision of measurement became more relevant and important. And, as it forced them to address the fact that the previous standards didn't actually give the absolute, total, and entirely dependably objective view of reality, it impinged on their most cherished beliefs gained by the really impressive usefulness of those practices up till that point. I can imagine that for people who were raised with, trained with, and had a professional and, most of all, a huge emotional stake in that faith in the potency of physics, facing those discrepancies was extremely painful. Some, like Planck and Einstein, started doing physics which took the newly faced imprecision of measurement into account and did physics a new way. But the need to maintain the, yes, if you want to put it that way, "religious" faith that what they were doing provided the kind of objective view of reality that was the cherished belief of physicists and their fans in the lay population persists.

I think the multi-verse phenomenon among physicists, perhaps the string theorists, the M theorists, etc., are all trying to push against the limits of physics and coming up with all manner of bizarre ways to do that. It reminds me of the epicycles and other things that late classical and medieval astronomy came up with to maintain the earth-centered cosmological system they liked so much, only on both speed and acid. I think the social sciences, trying to measure things, especially social phenomena that couldn't be reduced or really located and which were not static reality, have come up with many even more decadent claims presented as if they had any reliability at all when they don't.
Even trying to come up with a mathematical view of one human being, even just their physical body, runs up against limits almost at the beginning. And if you think that's far-fetched: while writing that, I remembered that it was something which Eugene Wigner implied in his famous paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences:

Let us consider a few examples of "false" theories which give, in view of their falseness, alarmingly accurate descriptions of groups of phenomena. With some goodwill, one can dismiss some of the evidence which these examples provide. The success of Bohr's early and pioneering ideas on the atom was always a rather narrow one and the same applies to Ptolemy's epicycles.

Only, if you're going to consider a few examples of false theories which give a few "alarmingly accurate descriptions of groups of phenomena," you certainly should consider the many examples of people with the habit of believing in the ultimate potency of mathematical descriptions, especially of phenomena that are far more complex than what Bohr's or Ptolemy's studies dealt with. When what is allegedly being measured, and so described, is far more complex, the chances are that what is produced will turn out to be alarmingly inaccurate, though the faith in science will lead people to believe it to be reality until reality bites back. In the biological sciences, from the late 19th century until even today, that faith has gotten millions of people killed, oppressed, discriminated against. The next time you hear some behavioral scientist talk about the "difference" between "female and male brains," gender groups, races, ethnic groups, etc., or when you read a sociological survey or anything that relies on reported responses of subjects, such as an opinion survey, you're just hearing that faith being pushed entirely past where it is warranted at all. The "success" of some of it, such as the recently tested faith in the efficacy of polling in predicting presidential elections, was most likely always an illusion based in that old and baseless faith in the ability of numbers to give you an objective view of reality.

Then there is this, from the end of Wigner's paper:

If viewed from our real vantage point, the situation presented by the free-electron theory is irritating but is not likely to forebode any inconsistencies which are unsurmountable for us. The free-electron theory raises doubts as to how much we should trust numerical agreement between theory and experiment as evidence for the correctness of the theory. We are used to such doubts.

As a physicist and mathematician, Wigner may have had a very naive view of the complexity you quickly run into when you're studying living beings, even in some of their simpler aspects. For those of you who may not have read the post here from a few weeks back, I'll point out what the eminent biologist Lynn Margulis said about an exchange she had with the eminent geneticist Richard Lewontin about mathematical modeling in genetics, which runs up against the discrepancies and the limits of that faith in the efficacy of mathematics a lot faster and more consequentially than happens in studying electrons. And I'll point out, they're dealing with something far simpler than the massively more complex whole organisms and stupendously more complex species and ecological systems in which those genes exist.
The reason to believe that reductionist practices and mathematical modeling will work in studying and understanding those, even in a partial or general way, far surpasses unreasonable belief. As I recall, Margulis called it "religious faith" somewhere; I think that much religious faith is far more modest and far more reasonable. A more apt, accurate, and honest comparison wouldn't be to religion; it would be to the pseudo-scientific ideologies which sprang up in the 19th and 20th centuries among those atheists who had that faith in the ultimate power of mathematics and science, which has led their fellow atheists (and others) in science astray as well.

I think the creation of jillions and jillions of universes by physicists frustrated with the fact that they can't have an objective view of reality is sort of like the ultimate expression of what led Ptolemy to come up with his means of making things work when they didn't. Only he was addressing things that could be seen to be there, and so he couldn't come up with magic to make it work. Modern physics reached a point after World War Two when it chose to allow that. I think if Bertrand Russell in the late 1920s hadn't used his hatred of religion to make an analogy to the decadence he feared modern physics would lead to, he might have been more accurate about the nature of that decadence. And it's a quite materialistic and quite atheistic decadence, based on faith in a far different, far more human, and totally material set of desiderata.

* I'll quote the mathematician René Thom, again:

The excellent beginning made by quantum mechanics with the hydrogen atom peters out slowly in the sands of approximations in as much as we move toward more complex situations…. This decline in the efficiency of mathematical algorithms accelerates when we go into chemistry. The interactions between two molecules of any degree of complexity evades precise mathematical description … In biology, if we make exceptions of the theory of population and of formal genetics, the use of mathematics is confined to modeling a few local situations (transmission of nerve impulses, blood flow in the arteries, etc.) of slight theoretical interest and limited practical value… The relatively rapid degeneration of the possible use of mathematics when one moves from physics to biology is certainly known among specialists, but there is a reluctance to reveal it to the public at large … The feeling of security given by the reductionist approach is in fact illusory.

And refusing to face that has brought science into complete decadence.

Hate Update: I think the stuff Comte and Marx came up with was pretty bad, and the political-social-legal application of natural selection was far worse. The ultimate decadence among atheist-materialists today, the demotion of the mind to being a product of chance, random events, randomly present chemicals and physical phenomena in our skulls, the demotion of our minds into the total impeaching of the significance of our thoughts - though, they insist, out of any rational coherence, not their own thoughts - is just more of the same misplaced faith that is the theme of this post. Science works best when it is honestly applied to things it can honestly be applied to, and when the claims made for the results are honest about the range in which those results are reliable and the limits within which they are reliable. Very little science is presented with that honesty.
A hell of a lot of what is included within "science" today is an atheist equivalent of the "astrology and necromancy" that Bertrand Russell talked about in the wake of the robust belief in the Roman Catholic church:

When the robustness of the Catholic faith decayed at the time of the Renaissance, it tended to be replaced by astrology and necromancy, and in like manner we must expect the decay of the scientific faith to lead to a recrudescence of pre-scientific superstitions.

Only the very faith that Russell had in the total efficacy of science and mathematics had already produced far more dangerous and far more potent superstitions, which were already piling up corpses. That pile had started in the Reign of Terror, carried out as part of the very "enlightenment" Russell feared was passing away. He had seen more of it in his disillusioning visit to the Soviet Union nine years before he wrote that article, though his faith in science and his ahistorical upper-class Brit hatred of religion led him to entirely mischaracterize the incipient horrors he already saw forming under Lenin in his book The Practice and Theory of Bolshevism. But that would take an even longer post to go through. Whatever you can say about the horrific results of Marxism, it wasn't religious superstition that it was based in; it was ultimately based in the very faith in the efficacy and monist potency of materialism and science that Russell held that produced it. And the same can be said about Nazism, which was based, entirely, on a belief in natural selection and the eugenics which blamed civilization and the morality of the Golden Rule for impeding natural selection's violent, brutal culling out of the "weaker members" of the human species.

Hate Update 2: Bertrand Russell's chapter treating The Materialistic Theory of History is absolutely full of double-talk, ahistorical garbage, bigotry, etc. He literally contradicts himself throughout it. His motive wasn't honesty about the developing horror of Soviet Marxism; it was to attribute everything about it to things he didn't like. You can still read similar stuff with similar motives on the atheist, alleged left, today.

Friday, April 7, 2017

GYÖRGY LIGETI - Sonata for Cello solo

Mathias Johansen - Cello

The Same With The Score

Arranged for Guitar by Kostas Tosidis

Kostas Tosidis, guitar

The Sonata has an interesting story: the first movement was composed as an imaginary dialogue between a man and a woman, written for a cellist Ligeti was in love with but whom he never told he was in love with. Later, in 1953, another cellist asked him to write a piece, so he added the second movement, the Capriccio, to it. However, when the piece was submitted to the Hungarian Composers Union (or, more honestly, the Soviet occupation's composer censors), they banned its public performance and publication, allowing only a recording for radio, which was never played. The piece was put aside and forgotten when Ligeti walked out of the Soviet bloc and began what he considered his mature compositional period, free of censorship. The piece was revived and received its first concert performance in 1983 and was published in 1990. I assume Kostas Tosidis had either Ligeti's permission to make the transcription or that of his estate. I think it's pretty successful.

Duncan's dopes never come here to read what was really said. They don't even read what he says on his own blog. Which is why he stopped saying things there. Eschaton is a thing of the past. Which would be ironic if the inmates knew what the word means.
Having Putin's Puppet in the Presidency Makes Things A Lot Crazier Than They Would Be If A Traitor Wasn't President

It is one of the insane things about having the Republican-fascists in power under the patronage of Vladimir Putin that who knows what's possible. The ever-increasing knowledge of how far he went to put Donald Trump in office means that he obviously sees him as an asset. With the unprecedented and open corruption of Trump and his ties to the Russian mafia that runs the country, it is clear that he is a partner with Putin. That makes what's been going on in Syria especially strange, because Putin is also the biggest backer of the Syrian regime and an active participant in the war of Bashar al-Assad. Then there was the recent attack in the St. Petersburg subway which, at least as of several days ago, was believed to have been retaliation for what the Putin regime is doing in Syria. I would find it hard to believe that al-Assad would commit the atrocity of the gas attack this week without at least consultation with the Putin government. It's even possible that he would want to get the OK from Putin to do it - if, in fact, it was his regime that did it and not the terrorist groups who are known to have chemical weapons in Syria.

What is especially insane is that any scenario you can think up is possibly true. I was wondering earlier this morning if Putin decided that a gas attack, allowing Donald Trump to launch a military attack, might reduce the pressure on what must be his major asset outside of Russia, the Trump regime. Americans are as stupid about rallying around any Republican military action as Russians have been for Putin. He's used those kinds of things, including terror incidents some believe his own government staged, to consolidate power. Maybe he's decided that Trump can get him more of what he wants than Assad, though he might be figuring on keeping both. Does that make more sense than that Assad would do something he would have to know risked major involvement with the biggest military in the world? Something that would prove that he, and possibly he and the Putin regime together, lied about his destroying his chemical weapons stockpile? Or maybe Putin had no involvement in any of it. Who knows? I doubt anyone in the American media does. One thing I do know is that a lot of the craziness comes from us having a person running the government who is an asset of Vladimir Putin, who also has assets leading the State Department and in other places in the Trump regime. If they weren't there, some of the issues making this so crazy wouldn't be there. If only Trump and his cronies weren't in place, so many of them with ties to the Russian kleptocrats - including his son-in-law, the de facto Secretary of State, his daughter and his sons.

Update: Says the guy who was stunned to find out in recent weeks that 90-year-olds dying is, you know, a thing. I wonder what he said when Irwin Corey died at 102. I can imagine that came as a huge unfair shock to him as well, with a good dose of histrionic complaint. By the way, these are the same people who accuse religious people of fearing death - that is, when they aren't saying that religion is a death cult. Heads I win, tails you lose is the standard operating procedure for these dolts. Never let them get away with it.

Thursday, April 6, 2017

Kenneth Gaburo - Maledetto

"Lingua II: Maledetto" by Kenneth Gaburo, performed by Stephen Miles, R. L.
Silver, Kartina Amin, Lisa Pokorski, Jason Rosenberg, Steven Jones, and Nadia Stegman at New Music New College's Speech Acts concert, November 18, 2000 in the Mildred Sainer Pavilion at New College of Florida.

I'd only ever heard this on the old CRI LP recording of a performance from the 1970s. I hope it's been performed since then. It's been way too long since I went to an avant garde music concert. Though Gaburo wasn't exactly that, since I don't think he sparked a movement. He was a really interesting composer-theater creator.

I was looking for something to curse the Republican-fascists with; this isn't exactly what I had in mind but it will do for now.

The Shade of Roger Taney is Smiling From Hell

I remember when I first went online and I started condemning the Supreme Court as a corrupt instrument which has, with the fewest of exceptions, been a secure means of preventing equality and justice, a lot of liberals got really upset, nervous, skittish. The older ones invoked the Republican-fascist campaign calling for the impeachment of Earl Warren, billboards and all. The friggin' Supreme Court had a little burp of liberalism for a few years in the later 1950s and 60s and people wanted to pretend that was the status quo of the most undemocratic branch of the government.

Well, along with the nuke the Republican-fascists set off today, including the skank from my home state, Susan Collins, I say the days of treating that thoroughly politicized body with kid gloves are well over. And good.

Neil Gorsuch has to be exposed every time he does the slightest thing that can be turned against the Republican-fascists who put him on the court; he will give us lots and lots to use. He will regularly vote to screw the majority of people on behalf of the oligarchs who begot and nurtured the fascist: the Federalist fascists, the American Enterprise Institute, etc. And the oligarchs. Democrats should feature his every outrage in a constant stream of political messaging that will turn him into the most hated Supreme since the putrid Roger B. Taney of Dred Scott infamy. He will probably do everything he can to bring us back to that period.

There should also be a move to limit the term of Supreme Court members to ten years. For any great justice you would like to keep on for life, there have been many times more who should never have been there to start with. The Founders were total novices and amateurs who could never have imagined a ruling elite with no sense of honor, such as the one we have in power now. They couldn't have imagined that the kind of scum such as Gorsuch, Thomas, Scalia and Alito could be put onto the court, and if they could, they were not only naive, they were jerks.

I Approve This Message from Samantha Bee - We Told You So: Russian Hacking

It's a real sign that American democracy is in a fatal tail spin when the "free press" only tells you the truth in the context of a comedy show. I really friggin' hate the free speech - free press industry that let the media lie us into fascism. Oh, yeah, and we had to have a Canadian tell us. Damn, I wish I'd moved there when my Latin teacher tried to talk me into it.

I Think Jay Semko's Mouse In A Hole Is Great Musical Theater In Six Minutes

Let me guess, you don't like it because he's not from New York or LA and has worked mostly in Western Canada. I don't care.
Your comment dismissing his song reminds me of the story that his friends told about Charlie Parker, who would feed coins into jukeboxes listening to Hank Williams songs. When his friends complained, asking him - the ultimate, no, far more than ultimate, hipster - how he could listen to that corny country stuff, he told them to listen to the stories. Yeah, I do think Jay Semko's song is better than that rehash of Civil War sentimentality - which I finally listened to and, while it isn't exactly tripe, I've heard it all before, many times. It suffers from being safely in the sentimentalized past, not the present with all the dangerous issues involved. I think, by the way, that Semko's fellow Canadian Robbie Robertson's act of imagination as heard on The Brown Album is ever so much more impressive. For anyone who missed it below, here's a link.

Update: Dopey is railing at me that I only listened to the words and ignored the music in the .... um, "masterpiece" he railed at me about for most of the past week. He's wrong. If I had to judge it on the music, I'd note that it's the same stuff I heard in the 1960s, rehashed for the past half century thousands of times, instead of the mere dozens of times I've heard the same .... um .... literary content. Banal junk.

Update 2: Here's Semko singing it with just his guitar at a house concert in Victoria BC. Notice that all of those voices you thought were different voices were, actually, his. Yes, it is great musical theater.

Update 3: Uh, Stupy, I hate to disappoint you but you're not the only idiot who tries to spam up my blog comments. Maybe you can go back to Baby Blue and hunt down your rival. I think I'll keep up the ban on posting your comments as well as your buddies' unless I can use their content. I might get to the point where I don't bother at all with them. What a loss of attention that will be to you.

The Clearest Thing In The Republican Attempt To Cover Up For Trump Is Republican Racism

The list of Republicans piling on the smoke to shield Trump from his lie that President Obama spied on him and his traitorous thugs in Trump Tower all comes together in their decision to blame everything on Susan Rice. Trump, his thugs, Lindsey Graham, Devin Nunes, Fox, the Breitbarts and even such crap as works for the corporate media, like Bloomberg, chose her for the same reasons they went after her over nothing in regard to the Benghazi pseudo-scandal during the Obama administration. The reason they are trying to create another in their decades of political pseudo-scandals is that she is a. Black, b. a Woman, c. a Democrat.

Republicans made a decision after the passage of the monumental Civil Rights legislation that they would take in those racists and segregationists who would never accept the equality of Black people, and others covered by those acts, and that tactic, the "Southern Strategy" of the criminal Richard Nixon, has served to empower Republicans and to move the party from main-stream, still quasi-democratic corporate servants ever farther along, turning from corporatism to fascism. From the party of inequality to fascism. From the party of Eisenhower to the party of Nixon - Trump. All Republicans in 2017 are the servants of racism, of fascism, today. They went from trying to harness racism and racists as a tool to those racists dominating and entirely running the party and, with them in power, the United States.

Susan Rice did nothing illegal, improper or unethical.
After they've dragged her through dozens more hearings, in addition to all of those the Republican-fascists subjected her and other members of the Obama administration to, they will come up with nothing. I am confident of that because they came up with nothing on anything else. If there has been something dependable in the Obama administration, it is that the people who were in it don't commit crimes, because they intentionally choose to not break the law. You can say the exact opposite about Republicans.

And you can be certain that, just as the criminal Bill O'Reilly went after Congresswoman Maxine Waters, the reason they've gone after Susan Rice is to rally their racist, fascist base.

Update: Just looked; you can add the genteel, Ivy Leaguer, Federalist fascists to that list, the ones who sponsored Neil Gorsuch's nomination to the Supreme Court. They're supporting the Trump traitors' cover-up on their website, too.

Even More Hate Mail - "The Greatest Minds In Science ....." (blah, blah, blah)

I don't know. If we're supposed to believe that this multi-universe option to solve the atheist-materialist conundrum caused by modern physics is legitimate, on the basis of the equations they come up with to determine all possible positions of electrons, what would the equations dealing with all possible aspects of my typing out that "M" at the beginning of that sentence in my question below be like? I would suspect that, while it would be impossible to calculate something like that, it would have to come up with far more possible values for all of the myriads of variables in that one physical act (never mind intentional choice) than those dealing with an electron. All of them requiring a universe in which all of those possible variations in the variables would be expressed in a concrete material form. And you think the Genesis story* - even taken as a figurative, instructive description, not as young earth creationist "science" - is absurd.

At least I'd like a really good explanation, one that all of the many physicists who insist on that multi-universe agree on, as to why my question on that matter isn't relevant. Or, maybe, to get around the problem of my experienced choice in typing that "M" being determinative, they'll claim that I didn't really choose that but that it merely exists in their stupendously infinite and concrete mega-supreme multi-universe system - which, by the way, we don't even know exists except within their imaginations, none of them really being able to fathom what they've imagined; I'd guess no two physicists imagine exactly the same ensemble. In which case, where did all of those infinite universes come from, and why do they exist as they do? And, yes, there is the problem that if the equations that our multi-universe creating physicists dream up are so efficacious as to either create or expose other universes, there would certainly have to be other universes in which other physicists have come up with the disconfirmation of their conjectures.

I think the most parsimonious explanation is that those other universes exist as nothing but wishful thinking by atheists who have proved how decadent they are willing to make science in order to impose their atheism on science. That has happened in cosmology, in neuro-science, in biology, and certainly in the social sciences over and over and over again since at least the advent of modern science. There is no magisterial wall between atheist ideology and science, as there is, in fact, one that keeps religion out.
Neither religion founded in the supernatural nor atheism based on vulgar materialism really has any place in science, though I'm afraid a lot of people who get paid to write papers and publish them would be hard put to come up with something to get into the journals, and to show their departments to try to get tenure, if such a wall against atheist religious preference were enforced - if atheist-materialist faith were kept out of science. I think the current faculties of the social sciences, neuro-sciences and cosmology would largely have to be put out on the dole if that were done. Hey, maybe firing them all would tip the political balance in favor of liberalism, assuming their hold on ideas like equality, equal justice, and our equal moral obligation to give them to everyone hasn't been obliterated by their materialist wishful thinking, due to their atheist faith in natural selection. I think Horowitz and Putin are good examples of what so often comes out at the end of atheist materialism: vulgar materialism and the monumental greed and depravity that constitute its morals and sacraments.

* I'm really tired of rehashing the first few chapters of Genesis along with the story of the flood. I think the story of Joseph at the end of the book is far more worth going over, especially considering the points Brueggemann makes about him acting as an agent of Pharaoh instead of according to the Jewish tradition, which are both a lot more important for us today and far more interesting. Especially as, Brueggemann points out, once you've seen one Pharaoh you've seen them all, and a lot of them are around today.

Materialism Is Entirely A Product of The Primitive Emotional Preferences Of Those Engaged In It - Hate Mail

I really wish that instead of railing at me and other commentators you would go read Adam Frank's excellent article. Read the very clear, very concise description of the alternatives: taking either the position that the wave function, itself, is a picture of reality, or the position that the one we happen to measure is what it would seem to be - the measurer's perspective on reality and not an objective reality itself. The first one, the stubborn insistence on what is obviously not a concrete view of reality, comes with a huge price; as Frank says, it insists on the really crazy result of the ever expanding multi-universes, which turns the "turtles all the way down" line atheists love to use against religious folks on its head and turns equations into creator gods on the say-so of contemporary physicists. I have always wondered how the tiny little actions we take generating new universes could be squared with the materialists' ideas of the force and power needed to do things, but if there's something like that within the theories, I've never come across it. It would seem that we, like the young witches in Harry Potter, are unconsciously doing far more impressive magic. All the time. Every one of my keystrokes while I write this has, I suppose, created universes in which someone doing exactly what I did typed every single other available wrong letter or character, or left out every one of them, or something like that. Try describing every alternative to even one tiny act you take and imagine what alternative universes would be generated to constitute every possible variation on it - a rough count is sketched below.
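To make the scale of that concrete, here is a back-of-the-envelope count. The figures are mine, purely for illustration: say each keystroke has about k = 50 distinguishable alternatives (other keys, capitalization, omission) and a short post runs to n = 500 keystrokes. Then the naive branch count is

\[
N_{\text{branches}} = k^{\,n} = 50^{500} = 10^{\,500\,\log_{10} 50} \approx 10^{849}
\]

Taken literally, one short post alone would call for something like \(10^{849}\) distinct concrete universes - compare that with the roughly \(10^{80}\) atoms usually estimated for the observable universe.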
The multiverse theory that is so influential in pop culture and in university departments would seem to turn us all into unconscious creator gods, far more potent in their creation than God in Genesis. Only, the materialists can comfort themselves that we're not creating by intention but through the power of their equations. I wonder if there are universes, in all those other possible universes, where physicists come up with equally potent equations that cancel out the power of our physicists, disproving the multi-universe conjecture. If our equations can have such power, why not theirs?*

Of course the other view, that the many (infinite?) possibilities of the wave function collapse into some kind of imposed reality when someone makes a measurement, also has the consequence of the people measuring influencing the actual physical reality. Or at least what we choose that to be. From what I understand, you can either choose to have our and everyone else's every act generating entire new universes continually, or you can choose to say that whatever reality of the physical universe we can have is, in a real way, the result of our minds, our decisions, and choices, but NOT the classical idea of it being an "objective," "real" view of a hard material reality.

I think the decision of which one is chosen is, as Frank says, entirely dependent on which one you like and really not on anything else. I've read physicists, mathematicians, and others involved in these issues note that which one is chosen probably has as much to do with the geographical location of where you went to grad school and which view dominated the department that granted you your degree - the department which also gave you a professional and, likely, financial stake in your chosen denomination of physics. Which is just another undermining of the classical-materialist belief in a solid, scientifically certain, material reality that comes with all of this.

For the attempt to turn the mind into another material object, governed by whatever forces under whatever laws whichever ideological scientist might prefer, the problem is far more basic. And here I think Frank put it very well.

Putting the perceiving subject back into physics would seem to undermine the whole materialist perspective. A theory of mind that depends on matter that depends on mind could not yield the solid ground so many materialists yearn for.

And, for the atheists in the audience, don't pretend otherwise: both of the choices available to materialists - either the elevation of the wave function into an ever expanding, multi-universe reality, or the one that admits that, for human purposes, our own choices govern what we will ever have as reality, whatever we can coherently or incoherently talk about matter being - BOTH OF THOSE ALTERNATIVES ARE dependent on our minds, our choices. There is no old fashioned, comforting, classical view of a solid, dependable universe available to you if you, as you also will, insist that the reality of the material universe is uniform at every scale of its existence.

The price for pretending otherwise is that ultimate decadence of pretending that our minds are an illusion, which not only collapses the variables in an equation those meaningless minds come up with, it collapses the possibility that any of it has significance and produces any knowledge of any truth or reality. The mind only becomes a "hard problem" when atheists want to force our minds out of what they are and into a narrow, classical physical world that died in the early 20th century.
It is in every way irrational, in every way anachronistic, and in every way as much a product of the emotional preferences of the materialists engaged in it as anything else: they just can't stand the implication of our intuitions, derived from the reality of our minds not being like physical things, that there is a God.

God ain't going away, dear; there is not a coffin big enough and no lid strong enough. People will not ignore the problems with materialism as long as they think, and the materialist hegemony that clogs universities and the media isn't going to stop that. It can't even stop people believing in the stuff CSICOP railed against; there are probably more people who believe in those things than there were in the mid 1970s. It's materialism in the high-brow form that's in trouble, though; as we can see from the product of indoctrination into hard-core materialism in the old Soviet Union and China, the vulgar form of it and the fascism that comes with that are flourishing. Materialism always seems to collapse into that. Look at David Horowitz. That high-brow materialist-atheist view of life is merely a matter of snobbery by people with degrees, or those who would like to be taken as such without the work. It is a product of social and class coercion and vulgar economic-social aspiration as much as anything else that people figure will get them ahead.

* Maybe a finger typed out every other possible "M" at the beginning of this sentence in a slightly different place on the key, and in some of them they forgot to capitalize it, or maybe they don't capitalize at the beginning of sentences in some of those universes. Maybe they choose different type faces.... It almost immediately becomes absurd.

Hate Update: What's an ignorant, middle-brow, blog babbler like you to do?

The Humanities Major Atheist Solution

Though never a whiz with the math,
And so on to science, no path,
I'll fake it with attitude,
Materialist platitude,
And pseudo-historical wrath.

Wednesday, April 5, 2017

Jay Semko - Mouse in a Hole - Answer To An Angry Rant

I am really, really not big on rock. Once in a while I hear something I think is worthwhile. As far as I can remember this is the last thing like that I heard; it was posted eleven years back, so it's been since then that I heard it. Makes me wonder how many mass gun murders there have been in North America since it was written.

Simps and Trump are alike re truth telling,
As well in their ranting and yelling,
They both come from Queens,
And they both act like tweens
And oddly alike re repelling.

Update: Like there's a difference. You all come from the same place and it's anatomical, not geographical. And speaking of meat-heads. He's still going on and on about my lying about not knowing about a band I never heard of or heard, going on three days .... or is it four? I don't know; is he still doing so on his vacation to Stockholm? As I said, if I'd spent all that money to go on a vacation, I can tell you, answering the numb nuts at Eschaton wouldn't be how I'd be spending my time. I'm guessing his girl friend was off enjoying herself while she left him to drool into his phone. I hope so; it must be bad enough to be traveling with him.

An ancient rock critic in Sweden,
Just couldn't stop blogging or tweetin'
His raging, OCD
Attention grabs, needy
Would come even while parked in Eden.

(I'll confess, this one took seven minutes, not the under five I limit myself to when limericking.)

Update: "in bed while using hand held devices".
Oh, please, I don't ever want to think of you in bed using a "hand held device".

Materialists Are The Atheist Equivalent of Young Earth Creationists

Reader rustypickup sent me a link to an interesting article by the University of Rochester astronomer Adam Frank about the persistent problems for the materialist model of reality in the face of the persisting puzzles of modern physics. Much of what he said will be familiar to people who've read my blog posts on the relevant issues. As good a way as any to show the problems - which Frank says physicists don't like to talk about, especially with outsiders - is to give a few quotes.

When I was a young physics student I once asked a professor: 'What's an electron?' His answer stunned me. 'An electron,' he said, 'is that to which we attribute the properties of the electron.' That vague, circular response was a long way from the dream that drove me into physics, a dream of theories that perfectly described reality. Like almost every student over the past 100 years, I was shocked by quantum mechanics, the physics of the micro-world. In place of a clear vision of little bits of matter that explain all the big things around us, quantum physics gives us a powerful yet seemingly paradoxical calculus. With its emphasis on probability waves, essential uncertainties and experimenters disturbing the reality they seek to measure, quantum mechanics made imagining the stuff of the world as classical bits of matter (or miniature billiard balls) all but impossible.

Like most physicists, I learned how to ignore the weirdness of quantum physics. 'Shut up and calculate!' (the dictum of the American physicist David Mermin) works fine if you are trying to get 100 per cent on your Advanced Quantum Theory homework or building a laser. But behind quantum mechanics' unequaled calculational precision lie profound, stubbornly persistent questions about what those quantum rules imply about the nature of reality – including our place in it.

Those questions are well-known in the physics community, but perhaps our habit of shutting up has been a little too successful. A century of agnosticism about the true nature of matter hasn't found its way deeply enough into other fields, where materialism still appears to be the most sensible way of dealing with the world and, most of all, with the mind. Some neuroscientists think that they're being precise and grounded by holding tightly to materialist credentials. Molecular biologists, geneticists, and many other types of researchers – as well as the nonscientist public – have been similarly drawn to materialism's seeming finality. But this conviction is out of step with what we physicists know about the material world – or rather, what we don't know.

Sorry, that's one of the problems with doing these topics. The ideas involved don't lend themselves to being disposed of in a few aphoristic statements designed for easy consumption by even the college-educated, TV-trained consumers of them. And as Adam Frank notes, it's apparent that some of the biggest names in science, even the physicists whose own field can't escape the vicissitudes of these hard and inconvenient truths, don't seem to be willing to really acknowledge that those truths are there, that they are real, and that the insurmountable hurdle they present for their materialist-religious ideology is there.
And for materialism, since its replacement for God is the physical universe and the laws constructed by science about those, those insurmountable problems are fatal to materialism in a way that they are not for non-materialist religion. Take the very first problem in that, that the real and effective modern understanding of electrons doesn't actually define WHAT they are, they present them in terms of the properties that physicists have assigned to them.  When they are talking about electrons, they aren't talking about a thing they're talking about what they believe an electron does.  And that problem is only more exacerbated by the fact that modern physics doesn't talk about an actual thing doing things, it can't do anything but present those "things" as a series of probabilities, none of which can be definitely assigned to the "thing" they're talking about. For physicists, the ambiguity over matter boils down to what we call the measurement problem, and its relationship to an entity known as the wave function. Back in the good old days of Newtonian physics, the behaviour of particles was determined by a straightforward mathematical law that reads F = ma. You applied a force F to a particle of mass m, and the particle moved with acceleration a. It was easy to picture this in your head. Particle? Check. Force? Check. Acceleration? Yup. Off you go. The equation F = ma gave you two things that matter most to the Newtonian picture of the world: a particle’s location and its velocity. This is what physicists call a particle’s state. Newton’s laws gave you the particle’s state for any time and to any precision you need. If the state of every particle is described by such a simple equation, and if large systems are just big combinations of particles, then the whole world should behave in a fully predictable way. Many materialists still carry the baggage of that old classical picture. It’s why physics is still widely regarded as the ultimate source of answers to questions about the world, both outside and inside our heads. In Isaac Newton’s physics, position and velocity were indeed clearly defined and clearly imagined properties of a particle. Measurements of the particle’s state changed nothing in principle. The equation F = ma was true whether you were looking at the particle or not. All of that fell apart as scientists began probing at the scale of atoms early last century. In a burst of creativity, physicists devised a new set of rules known as quantum mechanics. A critical piece of the new physics was embodied in Schrödinger’s equation. Like Newton’s F = ma, the Schrödinger equation represents mathematical machinery for doing physics; it describes how the state of a particle is changing. But to account for all the new phenomena physicists were finding (ones Newton knew nothing about), the Austrian physicist Erwin Schrödinger had to formulate a very different kind of equation. When calculations are done with the Schrödinger equation, what’s left is not the Newtonian state of exact position and velocity. Instead, you get what is called the wave function (physicists refer to it as psi after the Greek symbol Ψ used to denote it). Unlike the Newtonian state, which can be clearly imagined in a commonsense way, the wave function is an epistemological and ontological mess. The wave function does not give you a specific measurement of location and velocity for a particle; it gives you only probabilities at the root level of reality. 
Psi appears to tell you that, at any moment, the particle has many positions and many velocities. In effect, the bits of matter from Newtonian physics are smeared out into sets of potentials or possibilities.
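As an aside, the contrast Frank is drawing can be put in two lines. This is only a minimal sketch of the standard textbook equations he names, with the notation mine:

\[
\text{Newton:}\qquad m\,\frac{d^{2}x}{dt^{2}} = F, \qquad \text{state} = \big(x(t),\,v(t)\big)\ \text{exact and deterministic}
\]
\[
\text{Schrödinger:}\qquad i\hbar\,\frac{\partial\Psi(x,t)}{\partial t} = \hat{H}\,\Psi(x,t), \qquad |\Psi(x,t)|^{2} = \text{only a probability density for position}
\]

Both equations evolve their "state" deterministically; the difference is that what the second one evolves, the wave function, never hands you more than a distribution of possibilities for any measurement you actually make.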
All of this led some of the most philosophically sophisticated physicists, such as Eddington, to state, and even some of the most dedicated atheist-materialist philosophers, such as Bertrand Russell, to admit, that modern physics had pretty much destroyed the old materialism, which was fast becoming the standard religion of atheist-scientists. It is the old-time religion that still holds sway among even physicists whose own field undermined the possibility of them believing in it professionally. Materialist ideologues among modern physicists would seem, to me, to be the equivalent of biologists or geologists who held with young-earth creationism.

Frank goes into some of the truly bizarre stuff, like even the most extreme versions of multi-verse theory, which is found to be more acceptable to these materialists than just stating the obvious: that materialism is an inadequate model of the physical universe at even the level they can study with any kind of confidence. Never mind the insistence that the efficacy of their religious cosmology can be extended from quantum physics concerning electrons into complex chemistry, biology and even reliably reducing the minds that can't really grasp what electrons are, or where they are, with absolute certainty into computers made of meat.

I have noted how, on reading Eddington's lectures, Bertrand Russell predicted that science was entering into a decadent phase. That was nearly ninety years ago. What have obviously reached a stage of decay are the fields of cosmology, some branches of physics, and the elusive, absurd and blatantly announced attempts of scientists to define consciousness, mental activity, etc. into nothing but chemistry and physics - something that some of the big names in popular science such as Michio Kaku claim is inevitable. Frank has critiqued him on that count, as well.

Everywhere I've looked, when a scientist starts out with the clear intention, generally overtly stated, to support the materialist model of reality, the results have been utterly decadent, utterly dishonest, and the science it produces has often crashed catastrophically. In a lot of my blog writing I have dealt with the most deadly of those, eugenics, which the Darwin inner circle in Britain and Germany believed in as a part of the "material monist" view of life which they believed natural selection confirmed. I have noted a number of times that in his book, The History of Creation, which had Darwin's full support, Haeckel credited Darwin and that theory with the "final triumph" of "material monism". The current crop of materialists in cosmology, in neuro-science and other fields, who are trying to drive the final nails into the coffin of what they take as the main rival of their religious faith, a belief in God, are engaged in the same entirely extra-scientific effort.

Rustypickup sent me the link to the piece in response to my piece noting Stephen Jay Gould's willingness to forego his entire, decades long criticism of the Just-so story telling of sociobiology and evo-psy in the one instance where they disposed of the "vexing problem" of "altruism" on behalf of natural selection. I can't claim that my longstanding affection for Gould didn't take a big hit with that article and, especially, that particular lapse in his scientific integrity. And he lapsed for the same reason, ultimately, that led the original generation of Darwinists to refuse to note the problems with their universal ambitions for the theory of natural selection. That someone as humane as Gould was willing to do that in the 1970s, two years after he noted that sociobiology risked a revival of eugenics and all of the possibilities we in the post-war period know with such horrible certainty, shows how dangerous materialism really is.

It's mighty tempting to go into all of the problems of materialism that Adam Frank sets down in his article, but I'll give you his conclusion, which I can agree with, sort of:

Kick at the rock, Sam Johnson, break your bones:
But cloudy, cloudy is the stuff of stones.

Oh, well, I can't keep myself from noting that if the mind is not material then there is no way to apply any of the methods of physics or even biology to it, because those can only have any reliability at all in so far as they reveal properties of matter. If the mind is not material, it will certainly have qualities which the methods of science can't approach. To anyone who objects to that idea: modern physics has certainly shown, to a high degree of reliability, that the methods of science can't even reach all of the qualities of the physical universe. As I will brag, once again, I once got the arch-materialist, arrogant physicist Sean Carroll to admit that there was not a single object in the entire physical universe that physics had defined comprehensively and exhaustively. Something about even the most studied object in the universe, "simple" and observable as that might be, has eluded the most exigent methods of science.

Susan Rice didn't ask the FBI for immunity from prosecution. Mike Flynn did.

Obama's campaign was not under FBI investigation. Trump's is.

Republicans Are The Party of White Supremacy - That Is Why They Are Going After Susan Rice Again

Rule of Republican-fascism: when you can use a Black Woman as a focus in your political witch hunts, do it, and if it is a Black Woman you've used that way before, so much the better. The racist, neo-confederate, white-supremacy party is going after Susan Rice again, AND THEY'RE GOING AFTER HER, THE FORMER NATIONAL SECURITY ADVISOR TO THE PRESIDENT, FOR DOING HER JOB.

The mentally defective Pauls, Rand and Big Daddy Ron in his podcast to his pod-people, are hardly alone in this Republican effort. It began in the Trump White House with, we know, National Security political operatives Ezra Cohen-Watnick and Michael Ellis doing what Susan Rice is being accused of by Republicans: leaking classified information to Devin Nunes in what was an obvious attempt to throw up a smoke screen around Trump regime treason with Russia. The whole thing is, obviously, planned and coordinated to use Susan Rice to rally the racists who are the main support of the Republican-fascist party.

They went after Rice before as part of their phony-scandal over the Benghazi attack; they will, no doubt, drag her through the mud before she is ultimately exonerated - but exonerated only for the very few who were paying real attention to it, and the few of those who are still paying attention.

The Republican Party made the decision to appeal directly to segregationists and racists in the 1960s; it is now in the total control of racists and overt white supremacists. Male and female, Republicans in the Congress eagerly went after Black Women with glee during the Republican campaign of racism against the first Black president in our history.
FOX, Breitbart, the toilet papers, etc. were all part of it. I wouldn't be surprised if they aren't taking lessons in it from Donald Trump's patron Vladimir Putin, who is funding and promoting neo-Nazis and neo-fascists in Europe and here. This is serious, this is proto-Nazi behavior, this is unacceptable. This has to be thrown in the face of the few phony-moderates and used to bring them down and get their party out of power.

Tuesday, April 4, 2017

Do Authorities Now Have What They Need on Trump?

Charles Ives - Religion

I have gotten a lot of flak for the 20th-21st century music I post. Dusan Bogdanovic was slammed for being too ethnic (meaning that, as a composer who grew up in the Balkans, they figured he shouldn't make reference to Balkan music), and my dear old, lamented and beloved friend Arthur Berger was denounced as an "academic serialist" by someone who neither knew what the term allegedly meant nor was familiar with Arthur's music - most of which was quite tonal - anyway.

But one composer, the most radical of them all, perhaps, Charles Ives, was someone I could post - anything from his most sentimental parlor song to his most extremely proto-serial work - without getting any flak, because sometime in the 1970s or so Charles Ives was declared kewl. Something he also never was. I would bet he'd have seriously objected to the idea.

Anyway, looking up one of his most perfect pieces in any form, "The Housatonic At Stockbridge," I noticed that the next song in Ives' self-published book 114 Songs has a text which could not have possibly been more apropos to several of my recent posts on atheism. The song is "Religion" (page 37 of the Pdf below). The text of the song, from James T. Bixby's essay "Modern Dogmatism," reads:

“There is no unbelief;
And day by day, and night by night, unconsciously
The heart lives by that faith the lips deny,—
God knows the why.”

It is just a far earlier statement of my observation that none of the atheist-materialists ever, not for a second, live their lives as if they really believed their claims that people are objects, "lumbering robots," "computers made of meat" whose consciousness is a delusion based in the mere working out of random chemical-physical combinations and fluctuations in our skulls - claims that would also dispose of free-will, free-choice (all compositional choices included, not to mention their own academic blather) and the rights and privileges enjoyed by human beings, etc. None of them really live that way. Apparently Mr. Bixby said the same thing in a different way in 1891, and the great Charles Ives chose to set those words 29 years later because he thought they were right.

Here is the song sung.
David Pittsinger, voice
Douglas Dickson, piano

Here's a wider context for the text set by Ives:

She is right who sings:—
“There is no unbelief;
And day by day, and night by night, unconsciously
The heart lives by that faith the lips deny,—
God knows the why.”

I wish everything could get looked up that fast. It would have taken me at least an hour to look this stuff up, assuming the university library where I would have to go look for it had it. I wish it were possible to look up everything online.
Charles Ives - The Housatonic at Stockbridge
Jan DeGaetani, voice
Gilbert Kalish, piano
Score (Page 32 of the Pdf)

“I think he is an idiot and forgot who I am"

That would be [Male-1], the attention-starved, as-seen-on-TV goof, Carter Page, the callow numbskull with a PhD who so impressed Donald Trump that he named him as a campaign consultant, only to have other members of the Trump campaign deny he had any role in the campaign. Or something like that. Like everything to do with Page, it leaves you shaking your head as to how someone who is so obviously limited and so obviously immature could have graduated from the Naval Academy and obtained graduate degrees. I looked at the Wikipedia page of The School of Oriental and African Studies, which gave him that PhD. He doesn't seem to appear in the long list of grads that the school's PR people, who obviously wrote the Wiki, wanted to brag on. At least as of this morning.

I think that the Russian spy got it right: Carter Page is an idiot. He's such an attention-starved idiot that he confirmed that he was the dolt who got unwittingly recruited to be a Russian spy asset by Russian spies who played him on his conceit, his idiocy and his total cluelessness. It's nice to know that you can agree with Russian spies on one thing, at least.

Considering the news of the morning carries the bad news that Mike Pence is meeting with Republican-fascists in the Congress to try to revive their Kill the Poor replacement for Obamacare, I don't know whether the damage that the overt gangsters of the Trump regime will do is enough to make one hope that a Pence presidency will replace it. Pence is quite capable of doing a hell of a lot of damage, and you can be certain that the media will present him as a savior and enable him and the thugs in the Congress in doing what Trump and his thugs can't do.

I say we should make Pence and Ryan and, really, all of the Republicans in the line of succession for the presidency as controversial as possible; they have lots of really awful stuff in their public life to do that with. I'm not sure Trump-Kushner will be around long enough to depend on them bringing down Republican-fascism.

Another thing that's obviously needed: the pardoning powers of the president need to be reined in. I doubt the Trump regime would be anywhere near as bold if it were impossible for Trump or Pence to pardon them as their crimes in office and elsewhere are exposed. I am certain that if Trump doesn't issue the kinds of pardons that previous Republicans have, to end investigations and to protect their own criminality, Pence will. And if not Pence, then Ryan or whoever succeeds them. There are some crimes that should not be shielded from prosecution: those which happen at the highest levels of power, especially by members of the same criminal administrations and their equally corrupt political party.

Hyacinth Bucket and Zaphod Beeblebronx

It's a real but minor source of amusement to me to see Freki trying to pull the class-based Brit tactic of brazening it out by putting on a superior attitude and a pose of looking down her nose. Like that's ever worked on the Irish. I think that's one of the things she really hates about me: that when I knew she didn't know what she was talking about, or was lying, she couldn't cow me with that stuff.
It's really a lot like what Simps does, only he does it from the alleged superiority so many of NYC's residents believe comes with having been born in Queens and spending his time in the most over-rated city on the continent. Especially when they're convinced they're the definition of teh kewl. Like that's ever worked with people who aren't that impressed with that, either. I don't know which of the two is the stupider, but they're only risking making me waste time, because it's funny watching them make fools of themselves. Two different styles of snobbery, one effect.

Monday, April 3, 2017

Trump is Panicking About Russia

Bernie Sanders Cut It Out, 2016 Is Over Put On Your Big Boy Pants

Charlie Pierce has one of the too few adult pieces I've seen on how, despite the continued idiocy of the Susan Sarandon play group on the play left, adults who want to oppose Republican-fascism have better things to do than to re-litigate the last election. Bernie Sanders, with whom I was more than willing to let bygones be bygones, isn't helping much, but we really don't have time to waste on that.

For now, it's a matter of getting Democrats elected on the federal and state level, and those campaigns aren't a re-run of the 2016 primaries or general election. Purity tests and futile primary challenges dreamed of in liberal enclaves in other states by people without a clue are not going to be useful.

Speaking of primaries, this is the time to get rid of the stinking, anti-democratic caucuses in favor of primaries and to take the primaries out of the hands of Republican legislatures and put them in the hands of the Democratic Party. I proposed that all of them, everywhere, be made Washington State style mail-in ballots as a means of bypassing the states, the crooked voting machines and the stinking anti-democratic caucuses, and to get high volume turnouts everywhere, so there will be less of a chance that no one knows whom the majority of Democrats support, and none of the crap that happened at caucuses, especially in states with really cockamamie systems like the one in Nevada. The Democratic Party owns its nomination - not the state governments, too many of which are in the control of ratfucking Republican-fascists, and not the federal government. Even if it cost a lot to run, it would be worth owning our own nomination race. It would also get the choice out of the hands of Iowa and New Hampshire. It's way past time that those two lily-white states gave up that influence in a party that depends, absolutely, on a far more diverse electorate.

You Can Tell When An Eschatot Is Lying Because Their Fingers Are Moving - Hate Mail

I don't recall even knowing who Gerry Devine and the Hi-Beams are, not to mention knowing any of their music. I don't think I ever heard of that song; I've got a pretty good memory for even third-rate music, so I think I'd remember pretty much everything dopey brings up. It says online: "He first achieved notice with a 1989 New York Music Award for Best New Songwriter." By 1989 I had managed to pretty much avoid hearing pop music of any kind for well over a decade, not to mention being familiar enough with any one person or band to know much about them. Give or take one of the really big ones like Prince, or one whose politics brought them to my notice, like Lady Ga Ga. Disco was the last straw for me. I stopped going to bars over disco. That and realizing I didn't really enjoy them. So, yeah, like everything else he says, he's lying.
I see that meathead is supposedly posting comments from Sweden?  Jeesh, going to Sweden and spending your days posting comments to Baby Blue?  It's the intellectual equivalent of going to Florence or Paris and looking for a McDonald's.  Update:  The Old Ugly American took time out of his Swedish vacation to come here to rage in his senectitude about this.   I wonder how many dollars of his vacation time it's taking for him to do that.  What an idiot.  Update 2:  The wack-job is still ranting at me from Stockholm for going on three hours. Geesh, he should have stayed home, he could have done that from the comfort of his own play pen for free.   I never friggin' heard of (wait, I've got to scroll up, I can't remember the band name) Gerry Devine and his White Sox or whatever they're called.  I never said anything about their song because I never heard of them before.   I don't know, is Gerry Devine like Divine's little brother or something?  All I can think of is a 300lb female impersonator.  Which is unfortunate because now I've got the theme song for Female Trouble going through my mind.  The curse of a good musical memory.  Sometimes.  Update 3:  He's still railing at me from Sweden, I'd love to know the per minute cost in vacation dollars he's spending on insisting that I'm lying about not knowing about some pop-group I've never heard of before.   Dopey, when you get home BG should get you to a geriatric psychiatrist to get you checked out for dementia.  Or maybe you need your meds adjusted.   Update 4:  So far I'm counting six ranting e-mails from c. 60 degrees North and across the ocean and a couple of seas going on and on about an obscure pop-music act which I've never heard of before.   I think I'll have them bronzed.   Baby boots to geezer pouts.   The last one contains his ultimate insult, that I'm "a hick".  Well, this hick knows one thing, if I were in Stockholm I wouldn't be spending time in front of a screen screeching at Baby Blue and here blowing smoke.   Though I'd rather be out in the country than in a city.    I hadn't realized it before just now, but there is something really fitting that the day the Senate Judiciary Committee moves the nomination of the truly wicked Neil Gorsuch on to, no doubt, confirmation by the American Babylon that the Republican-fascists are, the day's lectionary readings include the story of Susanna and the two evil judges who falsely accuse her when she won't let them have sex with her.    Evil judges, we've got by the scores and hundreds, a Daniel come to judgement to expose them and defeat them we don't have.
Many-worlds interpretation
From Wikipedia, the free encyclopedia

Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views historical reality as a many-branched tree, wherein every possible quantum outcome is realised.[12] Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics.

In Dublin in 1952 Erwin Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that what the equation that won him a Nobel prize seems to be describing is several different histories, which are "not alternatives but all really happen simultaneously". This is the earliest known reference to many-worlds.[15][16]

Interpreting wavefunction collapse

The unreal/real interpretation

Similarities with the de Broglie–Bohm interpretation

Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie–Bohm mechanics and of Everett's many-'worlds'. In particular, the unreal MW interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch 'worlds':

The second issue with Bohmian mechanics may at first sight appear rather harmless, but which on a closer look develops considerable destructive power: the issue of empty branches. These are the components of the post-measurement state that do not guide any particles because they do not have the actual configuration q in their support. At first sight, the empty branches do not appear problematic but on the contrary very helpful as they enable the theory to explain unique outcomes of measurements. Also, they seem to explain why there is an effective "collapse of the wavefunction", as in ordinary quantum mechanics. On a closer view, though, one must admit that these empty branches do not actually disappear. As the wavefunction is taken to describe a really existing field, all their branches really exist and will evolve forever by the Schrödinger dynamics, no matter how many of them will become empty in the course of the evolution. Every branch of the global wavefunction potentially describes a complete world which is, according to Bohm's ontology, only a possible world that would be the actual world if only it were filled with particles, and which is in every respect identical to a corresponding world in Everett's theory. Only one branch at a time is occupied by particles, thereby representing the actual world, while all other branches, though really existing as part of a really existing wavefunction, are empty and thus contain some sort of "zombie worlds" with planets, oceans, trees, cities, cars and people who talk like us and behave like us, but who do not actually exist. Now, if the Everettian theory may be accused of ontological extravagance, then Bohmian mechanics could be accused of ontological wastefulness. On top of the ontology of empty branches comes the additional ontology of particle positions that are, on account of the quantum equilibrium hypothesis, forever unknown to the observer. Yet, the actual configuration is never needed for the calculation of the statistical predictions in experimental reality, for these can be obtained by mere wavefunction algebra. From this perspective, Bohmian mechanics may appear as a wasteful and redundant theory.
I think it is considerations like these that are the biggest obstacle in the way of a general acceptance of Bohmian mechanics.[32]

Frequency-based approaches

Everett (1957) briefly derived the Born rule by showing that the Born rule was the only possible rule, and that its derivation was as justified as the procedure for defining probability in classical mechanics. Everett stopped doing research in theoretical physics shortly after obtaining his Ph.D., but his work on probability has been extended by a number of people. Andrew Gleason (1957) and James Hartle (1965) independently reproduced Everett's work,[36] which was later extended.[37][38] These results are closely related to Gleason's theorem, a mathematical result according to which the Born probability measure is the only one on Hilbert space that can be constructed purely from the quantum state vector.[39]

Decision theory

A decision-theoretic derivation of the Born rule from Everettian assumptions was produced by David Deutsch (1999)[40] and refined by Wallace (2002–2009)[41][42][43][44] and Saunders (2004).[45][46] Some reviews have been positive, although the status of these arguments remains highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes.[47] In the New Scientist article reviewing their presentation at a September 2007 conference,[48][49] Andy Albrecht, a physicist at the University of California at Davis, is quoted as saying "This work will go down as one of the most important developments in the history of science."[47]

The Born rule and the collapse of the wave function have been obtained in the framework of the relative-state formulation of quantum mechanics by Armando V. D. B. Assis. He has proved that the Born rule and the collapse of the wave function follow from a game-theoretical strategy, namely the Nash equilibrium within a von Neumann zero-sum game between nature and observer.[50]

Symmetries and invariance

Wojciech H. Zurek (2005)[51] has produced a derivation of the Born rule where decoherence has replaced Deutsch's informatic assumptions.[52] Lutz Polley (2000) has produced Born rule derivations where the informatic assumptions are replaced by symmetry arguments.[53][54] Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman,[55] proposed a similar approach based on self-locating uncertainty.[56] In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule.

MWI overview

[Figure: schematic illustration of splitting as a result of a repeated measurement.]

Relative state

Since Everett's original work, there have appeared a number of similar formalisms in the literature. One such is the relative state formulation. It makes two assumptions: first, that the wavefunction is not simply a description of the object's state, but that it actually is entirely equivalent to the object, a claim it has in common with some other interpretations; second, that observation or measurement has no special laws or mechanics, unlike in the Copenhagen interpretation, which considers the wavefunction collapse to be a special kind of event that occurs as a result of observation. Instead, measurement in the relative state formulation is the consequence of a configuration change in the memory of an observer described by the same basic wave physics as the object being modeled.
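A minimal sketch of the measurement account that paragraph describes may be useful; this is the standard von Neumann/Everett scheme, with the notation mine:

\[
\Big(\sum_i c_i\,|s_i\rangle\Big)\otimes|O_{\text{ready}}\rangle
\;\longrightarrow\;
\sum_i c_i\,|s_i\rangle\otimes|O_i\rangle
\]

Here the \(|s_i\rangle\) are the object's states, \(|O_{\text{ready}}\rangle\) is the observer's memory before the interaction, and \(|O_i\rangle\) is the observer having recorded outcome \(i\). Nothing but ordinary Schrödinger evolution has happened; the right-hand side is a single entangled superposition in which each term \(|s_i\rangle\otimes|O_i\rangle\) is a "branch", \(|O_i\rangle\) is the relative state of \(|s_i\rangle\), and the Born rule assigns each branch the weight \(|c_i|^2\).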
[Figure: successive measurements with successive splittings.]

Under the many-worlds interpretation, the Schrödinger equation, or its relativistic analog, holds all the time everywhere. An observation or measurement is modeled by applying the wave equation to the entire system comprising the observer and the object. One consequence is that every observation can be thought of as causing the combined observer–object wavefunction to change into a quantum superposition of two or more non-interacting branches, or split into many "worlds". Since many observation-like events have happened and are constantly happening, there are an enormous and growing number of simultaneously existing states.

If a system is composed of two or more subsystems, the system's state will be a superposition of products of the subsystems' states. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and it is no longer possible to consider them independent of one another. In Everett's terminology each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted.

Properties of the theory

MWI (or other, broader multiverse considerations) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe.[58][59]

MWI, being a decoherent formulation, is axiomatically more streamlined than the Copenhagen and other collapse interpretations, and is thus favoured under certain interpretations of Occam's razor.[60] Of course there are other decoherent interpretations that also possess this advantage with respect to the collapse interpretations.

Comparative properties and possible experimental tests

In 1985, David Deutsch published three related thought experiments which could test the theory against the Copenhagen interpretation.[62] The experiments require macroscopic quantum state preparation and quantum erasure by a hypothetical quantum computer, which is currently outside experimental possibility. Since then Lockwood (1989), Vaidman and others have made similar proposals.[61] These proposals also require an advanced technology which is able to place a macroscopic object in a coherent superposition, another task which it is uncertain will ever be possible to perform. Many other controversial ideas have been put forward though, such as a recent claim that cosmological observations could test the theory,[63] and another claim by Rainer Plaga (1997), published in Foundations of Physics, that communication might be possible between worlds.[64]

Copenhagen interpretation

The universe decaying to a new vacuum state

Popular Comments

Also, it is a common misconception to think that branches are completely separate. In Everett's formulation, they may in principle quantum interfere (i.e., "merge" instead of "splitting") with each other in the future,[68] although this requires all "memory" of the earlier branching event to be lost, so no observer ever sees two branches of reality.[69][70]

MWI response: Everett's treatment of observations/measurements covers both idealised good measurements and the more general bad or approximate cases.[74] Thus it is legitimate to analyse probability in terms of measurement; no circularity is present.
We cannot be sure that the universe is a quantum multiverse until we have a theory of everything and, in particular, a successful theory of quantum gravity.[31] If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds would be invalid.[1][4][5][6][7]

One objection is that MWI multiplies entities extravagantly, violating Occam's razor. MWI response: Occam's razor is actually a constraint on the complexity of a physical theory, not on the number of universes; MWI is a simpler theory, since it has fewer postulates.[60] Occam's razor is in fact often cited by MWI adherents as an advantage of MWI.

There is a wide range of claims that are considered "many-worlds" interpretations. It was often claimed by those who do not believe in MWI[82] that Everett himself was not entirely clear[83] as to what he believed; however, MWI adherents (such as DeWitt, Tegmark, Deutsch, and others) believe they fully understand Everett's meaning as implying the literal existence of the other worlds. Additionally, recent biographical sources make it clear that Everett believed in the literal reality of the other quantum worlds.[24] Everett's son reported that Hugh Everett "never wavered in his belief over his many-worlds theory".[84] Everett was also reported to believe that "his many-worlds theory guaranteed him immortality".[85]

MWI is considered by some to be unfalsifiable and hence unscientific, because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[69] claim MWI is directly testable. Everett regarded MWI as falsifiable, since any test that falsifies conventional quantum theory would also falsify MWI.[23]

Speculative implications

Weak coupling

A 1991 article by J. Polchinski also supports the view that inter-world communication is a theoretical possibility.[105] Other authors, in a 1994 preprint, also contemplated similar ideas.[106]

Absurd/highly improbable timelines

Many MWI proponents assert that every physically possible event has to be represented in the multiversal stack, and by definition this would include highly unlikely scenarios and timelines. Bryce Seligman DeWitt has stated that "Everett/Wheeler/Graham do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down."[111] Borrowing a phrase from T. H. White's The Once and Future King, Murray Gell-Mann describes the implications of his totalitarian principle as "Everything not forbidden is compulsory."[112] Max Tegmark has affirmed in numerous statements that absurd or highly unlikely events are inevitable under MWI. To quote Tegmark, "Things inconsistent with the laws of physics will never happen - everything else will... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely".[113] Frank J. Tipler, although a strong advocate of the many-worlds interpretation, has expressed some skepticism regarding this aspect of the theory. In a 2015 interview he stated, "We simply don't [know; it] might be that the modulus over the wavefunction of that possibility [i.e., an extremely absurd yet physically possible event] is zero, in which case there is no such world... There are universes out there, which you could imagine, which... would not be actualized."[114]
Many-worlds in literature and science fiction

Star Trek uses many-worlds in many stories. In the Original Series, Spock and Kirk cross over into a mirror universe and encounter versions of themselves from the other universe. In an episode of Star Trek: The Next Generation, Worf crosses over into a parallel universe while piloting a shuttlecraft and encounters several other universes. The TNG finale "All Good Things" uses the concept heavily as Picard jumps between times. This is continued in Star Trek: Deep Space Nine with episodes arcing between the Terran Empire and the Alliance, where Sisko and Kira also find mirror versions of themselves and of other characters, who are dead in the central universe or dead in the parallel universe.

Michael Crichton's 1999 novel Timeline is about time travel into the past. The technology used in the book is based upon the existence of the MWI multiverse as described by Everett.[115][116] The author Neal Stephenson drew on the many-worlds theory for some aspects of his 2008 novel Anathem. A more recent example, Rick and Morty on the channel Adult Swim, uses the many-worlds interpretation as a basis for the occurrences in the show; the cartoon also alludes to Schrödinger's cat in an episode in which the protagonists split their existence into two hypothetical, equally possible existences. In episode 5 of the Netflix series Stranger Things, the protagonists' middle-school teacher Scott Clarke specifically mentions the many-worlds theory when asked about the possibility of "theoretical" alternate dimensions. The many-worlds interpretation is also used in The Time Ships by Stephen Baxter.

References

2. Osnaghi, Stefano; Freitas, Fabio; Freire, Olival Jr. (2009). "The Origin of the Everettian Heresy". Studies in History and Philosophy of Modern Physics. 40 (2): 97–123. doi:10.1016/j.shpsb.2008.10.002.
4. Everett, Hugh (1957). "Relative State Formulation of Quantum Mechanics". Reviews of Modern Physics. 29 (3): 454–462. doi:10.1103/RevModPhys.29.454.
14. Everett, Hugh, "Relative State Formulation of Quantum Mechanics". Reviews of Modern Physics. 29 (July 1957): 454–462. The claim to resolve EPR is made on page 462.
23. Everett.
26. "A response to Bryce DeWitt", Martin Gardner, May 2002.
32. Boström, Kim Joris (2012). "Combining Bohm and Everett: Axiomatics for a Standalone Quantum Mechanics". arXiv:1208.5632 [quant-ph].
34. Kent, Adrian (2010). "One world versus many: The inadequacy of Everettian accounts of evolution, probability, and scientific confirmation". In S. Saunders, J. Barrett, A. Kent and D. Wallace (eds.), Many Worlds? Everett, Quantum Theory and Reality. Oxford University Press. arXiv:0905.0624.
35. Kent, Adrian (1990). "Against Many-Worlds Interpretations". Int. J. Mod. Phys. A. 5 (9): 1745–1762. arXiv:gr-qc/9703089. doi:10.1142/S0217751X90000805.
39. Gleason, A. M. (1957). "Measures on the closed subspaces of a Hilbert space". Journal of Mathematics and Mechanics. 6 (4): 885–893. doi:10.1512/iumj.1957.6.56050.
40. Deutsch, David (1999). "Quantum Theory of Probability and Decisions". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 455 (1988): 3129. arXiv:quant-ph/9906015. doi:10.1098/rspa.1999.0443.
41. Wallace, David (2002). "Quantum Probability and Decision Theory, Revisited". arXiv:quant-ph/0211104.
42. Wallace, David (2003). "Everettian Rationality: defending Deutsch's approach to probability in the Everett interpretation". Stud. Hist. Phil. Mod. Phys. 34 (3): 415–438. arXiv:quant-ph/0303050. doi:10.1016/S1355-2198(03)00036-4.
43. Wallace, David (2003). "Quantum Probability from Subjective Likelihood: Improving on Deutsch's proof of the probability rule". arXiv:quant-ph/0312157.
44. Wallace, David (2009). "A formal proof of the Born rule from decision-theoretic assumptions". arXiv:0906.2718 [quant-ph].
45. Saunders, Simon (2004). "Derivation of the Born rule from operational assumptions". Proc. Roy. Soc. Lond. A. 460 (2046): 1771–1788. arXiv:quant-ph/0211138. doi:10.1098/rspa.2003.1230.
46. Saunders, Simon (2004). "What is Probability?". In Quo Vadis Quantum Mechanics?. The Frontiers Collection. p. 209. arXiv:quant-ph/0412194. doi:10.1007/3-540-26669-0_12. ISBN 3-540-22188-3.
50. Assis, Armando V. D. B. (2011). "On the nature of and the emergence of the Born rule". Annalen der Physik. 523 (11): 883–897. arXiv:1009.1532. doi:10.1002/andp.201100062.
51. Zurek, Wojciech H. (2005). "Probabilities from entanglement, Born's rule from envariance". Phys. Rev. A. 71 (5): 052105. arXiv:quant-ph/0405161. doi:10.1103/physreva.71.052105.
52. Schlosshauer, M.; Fine, A. (2005). "On Zurek's derivation of the Born rule". Found. Phys. 35 (2): 197–213. arXiv:quant-ph/0312058. doi:10.1007/s10701-004-1941-6.
53. Polley, L. (2001). "Position eigenstates and the Statistical Axiom of Quantum Mechanics". In Foundations of Probability and Physics. p. 314. arXiv:quant-ph/0102113. doi:10.1142/9789812810809_0022. ISBN 978-981-02-4846-8.
54. Polley, L. (1999). "Quantum-mechanical probability from the symmetries of two-state systems". arXiv:quant-ph/9906124.
56. Sebens, Charles T.; Carroll, Sean M. (2014). "Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics". arXiv:1405.7577 [quant-ph].
60. Everett FAQ, "Does many-worlds violate Ockham's Razor?"
64. Plaga, R. (1997). "On a possibility to find experimental evidence for the many-worlds interpretation of quantum mechanics". Foundations of Physics. 27 (4): 559–577. arXiv:quant-ph/9510007. doi:10.1007/BF02550677.
65. Page, Don N. (2000). "Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?". Eighth Canadian Conference on General Relativity and Relativistic Astrophysics. 493: 225. arXiv:gr-qc/0001001. doi:10.1063/1.1301589. ISBN 1-56396-905-X.
67. Penrose, R. The Road to Reality, §21.11.
68. Tegmark, Max (1997). "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?". Fortschritte der Physik. 46 (6–8): 855–862. arXiv:quant-ph/9709032. doi:10.1002/(SICI)1521-3978(199811)46:6/8<855::AID-PROP855>3.0.CO;2-Q. To quote: "What Everett does NOT postulate: 'At certain magic instances, the world undergoes some sort of metaphysical "split" into two branches that subsequently never interact.' This is not only a misrepresentation of the MWI, but also inconsistent with the Everett postulate, since the subsequent time evolution could in principle make the two terms...interfere. According to the MWI, there is, was and always will be only one wavefunction, and only decoherence calculations, not postulates, can tell us when it is a good approximation to treat two terms as non-interacting."
70. Simon, Christoph (2009). "Conscious observers clarify many worlds". arXiv:0908.0322 [quant-ph].
73. Arnold Neumaier's comments on the Everett FAQ, 1999 & 2003.
76. Stapp, Henry (2002). "The basis problem in many-world theories". Canadian Journal of Physics. 80 (9): 1043–1052. arXiv:quant-ph/0110148. doi:10.1139/p02-068.
77. Brown, Harvey R.; Wallace, David (2005). "Solving the measurement problem: de Broglie–Bohm loses out to Everett". Foundations of Physics. 35 (4): 517–540. arXiv:quant-ph/0403094. doi:10.1007/s10701-004-2009-3.
78. Rubin, Mark A. (2003). "There is No Basis Ambiguity in Everett Quantum Mechanics". Foundations of Physics Letters. 17 (4): 323–341. arXiv:quant-ph/0310186. doi:10.1023/B:FOPL.0000035668.37005.e0.
79. Everett FAQ, "Does many-worlds violate conservation of energy?"
80. Everett FAQ, "How do probabilities emerge within many-worlds?"
81. Everett FAQ, "When does Schrodinger's cat split?"
87. Deutsch, David (1985). "Quantum theory, the Church–Turing principle and the universal quantum computer". Proceedings of the Royal Society of London A. 400 (1818): 97–117. doi:10.1098/rspa.1985.0070.
94. Survey Results (archived 2010-11-04 at the Wayback Machine).
102. W. M. Itano et al., Phys. Rev. A 47, 3354 (1993).
103. M. Sargent III, M. O. Scully and W. E. Lamb, Laser Physics (Addison-Wesley, Reading, 1974), p. 27.
104. M. O. Scully and H. Walther, Phys. Rev. A 39, 5229 (1989).
105. J. Polchinski, Phys. Rev. Lett. 66, 397 (1991).
106. M. Gell-Mann and J. B. Hartle, "Equivalent Sets of Histories and Multiple Quasiclassical Domains", preprint, University of California at Santa Barbara, UCSBTH-94-09 (1994).
107. H. D. Zeh, Found. Phys. 3, 109 (1973).
108. H. D. Zeh, Phys. Lett. A 172, 189 (1993).
109. A. Albrecht, Phys. Rev. D 48, 3768 (1993).
110. D. Deutsch, Int. J. Theor. Phys. 24, 1 (1985).
111. DeWitt, B. (1970). "Quantum mechanics and reality". Physics Today. 23 (9): 30.
112. Johnson, G. (1999). Strange Beauty: Murray Gell-Mann and the Revolution in Twentieth-Century Physics. Knopf. p. 224. ISBN 978-0-679-43764-2.
113. Max Tegmark: Q and A (Multiverse Philosophy).
114. Audio interview with Frank Tipler, White Gardenia, December 2015.
115. The book erroneously attributes the phrase "many worlds" to Everett; it was actually coined by Bryce DeWitt.
116. Michael Crichton (2013). Timeline. Random House Publishing Group. pp. 121–123, 127–128. ISBN 978-0-345-53901-4.
(Figure: domains of major fields of physics.)

Physics deals with the combination of matter and energy. It also deals with a wide variety of systems, about which theories have been developed that are used by physicists. In general, theories are experimentally tested numerous times before they are accepted as correct descriptions of Nature (within a certain domain of validity). For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at much less than the speed of light. These theories continue to be areas of active research: for instance, a remarkable aspect of classical mechanics known as chaos was discovered in the 20th century, three centuries after the original formulation of classical mechanics by Isaac Newton (1642–1727). These "central theories" are important tools for research in more specialized topics, and any physicist, regardless of his or her specialization, is expected to be literate in them.

Classical mechanics

Classical mechanics is a model of the physics of forces acting upon bodies; it includes sub-fields to describe the behaviours of solids, gases, and fluids. It is often referred to as "Newtonian mechanics" after Isaac Newton and his laws of motion. It also includes the classical approaches given by the Hamiltonian and Lagrangian methods. It deals with the motion of particles and with general systems of particles. There are many branches of classical mechanics, such as statics, dynamics, kinematics, continuum mechanics (which includes fluid mechanics), and statistical mechanics.

• Mechanics: the branch of physics that studies objects and their properties, in the form of motion under the action of forces.

Thermodynamics and statistical mechanics

The first chapter of The Feynman Lectures on Physics is about the existence of atoms, which Feynman considered to be the most compact statement of physics, from which science could easily result even if all other knowledge was lost.[1] By modeling matter as collections of hard spheres, it is possible to describe the kinetic theory of gases, upon which classical thermodynamics is based. Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems at the macroscopic scale, and the transfer of energy as heat.[2][3] Historically, thermodynamics developed out of the desire to increase the efficiency of early steam engines.[4]

The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work.[5] They also postulate the existence of a quantity named entropy, which can be defined for any system.[6] In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.

Electromagnetism and electronics

(Figure: Maxwell's equations of electromagnetism.)

The special theory of relativity enjoys a relationship with electromagnetism and mechanics; that is, the principle of relativity and the principle of stationary action in mechanics can be used to derive Maxwell's equations,[7][8] and vice versa.
The theory of special relativity was proposed in 1905 by Albert Einstein in his article "On the Electrodynamics of Moving Bodies". The title of the article refers to the fact that special relativity resolves an inconsistency between Maxwell's equations and classical mechanics. The theory is based on two postulates: (1) that the mathematical forms of the laws of physics are invariant in all inertial systems; and (2) that the speed of light in a vacuum is constant and independent of the source or observer. Reconciling the two postulates requires a unification of space and time into the frame-dependent concept of spacetime.

General relativity is the geometrical theory of gravitation published by Albert Einstein in 1915/16.[9][10] It unifies special relativity, Newton's law of universal gravitation, and the insight that gravitation can be described by the curvature of space and time. In general relativity, the curvature of spacetime is produced by the energy of matter and radiation.

Quantum mechanics

Quantum mechanics is the branch of physics treating atomic and subatomic systems and their interaction with radiation. It is based on the observation that all forms of energy are released in discrete units or bundles called "quanta". Remarkably, quantum theory typically permits only probabilistic or statistical calculation of the observed features of subatomic particles, understood in terms of wave functions. The Schrödinger equation plays the role in quantum mechanics that Newton's laws and conservation of energy serve in classical mechanics (i.e., it predicts the future behavior of a dynamic system) and is a wave equation that is used to solve for wavefunctions.

For example, the light, or electromagnetic radiation, emitted or absorbed by an atom has only certain frequencies (or wavelengths), as can be seen from the line spectrum associated with the chemical element represented by that atom. The quantum theory shows that those frequencies correspond to definite energies of the light quanta, or photons, and result from the fact that the electrons of the atom can have only certain allowed energy values, or levels; when an electron changes from one allowed level to another, a quantum of energy is emitted or absorbed whose frequency is directly proportional to the energy difference between the two levels. The photoelectric effect further confirmed the quantization of light.

In 1924, Louis de Broglie proposed that not only do light waves sometimes exhibit particle-like properties, but particles may also exhibit wave-like properties. Two different formulations of quantum mechanics were presented following de Broglie's suggestion. The wave mechanics of Erwin Schrödinger (1926) involves the use of a mathematical entity, the wave function, which is related to the probability of finding a particle at a given point in space. The matrix mechanics of Werner Heisenberg (1925) makes no mention of wave functions or similar concepts, but was shown to be mathematically equivalent to Schrödinger's theory. A particularly important discovery of the quantum theory is the uncertainty principle, enunciated by Heisenberg in 1927, which places an absolute theoretical limit on the accuracy of certain measurements; as a result, the assumption by earlier scientists that the physical state of a system could be measured exactly and used to predict future states had to be abandoned. Quantum mechanics was combined with the theory of relativity in the formulation of Paul Dirac.
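The proportionality between photon frequency and atomic energy difference described above, E_upper − E_lower = hf, is easy to check numerically. The sketch below is an illustrative calculation (not part of the original article) using the textbook Bohr energies E_n = −13.6 eV / n² for hydrogen; the n = 3 → 2 transition should give the red Balmer line near 656 nm:

```python
# Photon frequency/wavelength from an atomic energy difference (E = h*f).
h = 6.62607015e-34        # Planck's constant, J*s
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electron volt

def bohr_energy(n):
    """Bohr-model energy level of hydrogen, in joules."""
    return -13.6 * eV / n**2

dE = bohr_energy(3) - bohr_energy(2)   # energy released in the 3 -> 2 jump
f = dE / h                             # Bohr frequency condition
lam = c / f

print(f"Energy difference: {dE / eV:.3f} eV")    # ~1.889 eV
print(f"Photon wavelength: {lam * 1e9:.1f} nm")  # ~656 nm (Balmer alpha)
```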
Other developments include quantum statistics; quantum electrodynamics, concerned with interactions between charged particles and electromagnetic fields; and its generalization, quantum field theory.

String theory

Sometimes described as a candidate "theory of everything", string theory attempts to combine general relativity and quantum mechanics into a single theory that can describe the properties of both small and large objects. The theory is still under development.

Optics, and atomic, molecular, and optical physics

Optics is the study of light and the instruments created to use or detect it (e.g., telescopes, spectrometers). Atomic physics, molecular physics, and optical physics are individual sub-fields of AMO that study the physical properties of atoms, molecules, and light, respectively.

Condensed matter physics

The study of the physical properties of matter in a condensed phase.

High energy/particle physics and nuclear physics

Particle physics studies the nature of particles, while nuclear physics studies atomic nuclei. Cosmology studies how the universe came to be, and its eventual fate. It is studied by physicists and astrophysicists.

Interdisciplinary fields

Some interdisciplinary fields partially define sciences of their own. The table below lists the core theories along with many of the concepts they employ.

Theory: Classical mechanics
Major subtopics: Newton's laws of motion, Lagrangian mechanics, Hamiltonian mechanics, kinematics, statics, dynamics, chaos theory, acoustics, fluid dynamics, continuum mechanics
Concepts: Density, dimension, gravity, space, time, motion, length, position, velocity, acceleration, Galilean invariance, mass, momentum, impulse, force, energy, angular velocity, angular momentum, moment of inertia, torque, conservation law, harmonic oscillator, wave, work, power, Lagrangian, Hamiltonian, Tait–Bryan angles, Euler angles, pneumatic, hydraulic

Theory: Electromagnetism
Major subtopics: Electrostatics, electrodynamics, electricity, magnetism, magnetostatics, Maxwell's equations, optics
Concepts: Capacitance, electric charge, current, electrical conductivity, electric field, electric permittivity, electric potential, electrical resistance, electromagnetic field, electromagnetic induction, electromagnetic radiation, Gaussian surface, magnetic field, magnetic flux, magnetic monopole, magnetic permeability

Theory: Thermodynamics and statistical mechanics
Major subtopics: Heat engine, kinetic theory
Concepts: Boltzmann's constant, conjugate variables, enthalpy, entropy, equation of state, equipartition theorem, thermodynamic free energy, heat, ideal gas law, internal energy, laws of thermodynamics, Maxwell relations, irreversible process, Ising model, mechanical action, partition function, pressure, reversible process, spontaneous process, state function, statistical ensemble, temperature, thermodynamic equilibrium, thermodynamic potential, thermodynamic processes, thermodynamic state, thermodynamic system, viscosity, volume, work, granular material
Theory: Quantum mechanics
Major subtopics: Path integral formulation, scattering theory, Schrödinger equation, quantum field theory, quantum statistical mechanics
Concepts: Adiabatic approximation, black-body radiation, correspondence principle, free particle, Hamiltonian, Hilbert space, identical particles, matrix mechanics, Planck's constant, observer effect, operators, quanta, quantization, quantum entanglement, quantum harmonic oscillator, quantum number, quantum tunneling, Schrödinger's cat, Dirac equation, spin, wave function, wave mechanics, wave–particle duality, zero-point energy, Pauli exclusion principle, Heisenberg uncertainty principle

Theory: Relativity
Major subtopics: Special relativity, general relativity, Einstein field equations
Concepts: Covariance, Einstein manifold, equivalence principle, four-momentum, four-vector, general principle of relativity, geodesic motion, gravity, gravitoelectromagnetism, inertial frame of reference, invariance, length contraction, Lorentzian manifold, Lorentz transformation, mass–energy equivalence, metric, Minkowski diagram, Minkowski space, principle of relativity, proper length, proper time, reference frame, rest energy, rest mass, relativity of simultaneity, spacetime, special principle of relativity, speed of light, stress–energy tensor, time dilation, twin paradox, world line

References

1. Feynman, Richard Phillips; Leighton, Robert Benjamin; Sands, Matthew Linzee (1963). The Feynman Lectures on Physics. p. 1-1. ISBN 0-201-02116-1. Feynman begins with the atomic hypothesis, as his most compact statement of all scientific knowledge: "If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations ..., what statement would contain the most information in the fewest words? I believe it is ... that all things are made up of atoms – little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. ..." vol. I, p. I–2.
2. Perot, Pierre (1998). A to Z of Thermodynamics. Oxford University Press. ISBN 0-19-856552-6.
4. Clausius, Rudolf (1850). On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat. Poggendorff's Annalen der Physik. Dover reprint. ISBN 0-486-59065-8.
5. Van Ness, H. C. (1969). Understanding Thermodynamics. Dover Publications, Inc. ISBN 0-486-63277-6.
6. Dugdale, J. S. (1998). Entropy and its Physical Meaning. Taylor and Francis. ISBN 0-7484-0569-0.
7. Landau and Lifshitz (1951, 1962). The Classical Theory of Fields. Library of Congress Card Number 62-9181, Chapters 1–4 (3rd edition is ISBN 0-08-016019-0).
8. Corson and Lorrain, Electromagnetic Fields and Waves. ISBN 0-7167-1823-5.
10. Einstein, Albert (1916). "The Foundation of the General Theory of Relativity". Annalen der Physik. doi:10.1002/andp.19163540702.
I'm reading the Wikipedia page for the Dirac equation:

$J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$

with the conservation of probability current and density following from the Schrödinger equation:

$\nabla\cdot J + \frac{\partial\rho}{\partial t} = 0.$

The fact that the density is positive definite and convected according to this continuity equation implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. Now, if we wish to maintain the notion of a convected density, then we must generalize the Schrödinger expression of the density and current so that the space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schrödinger expression for the current, but must replace the probability density by the symmetrically formed expression

$\rho = \frac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*),$

which now becomes the 4th component of a space-time vector, and the entire 4-current density has the relativistically covariant expression

$J^\mu = \frac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*).$

The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite - the initial values of both ψ and ∂tψ may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus we cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time.

I am not sure how one obtains the new $\rho$ and $J^\mu$. How does one derive these two? And can anyone show me why the expression for the density is not positive definite?

any comment...? – Paul Reubens Oct 7 '12 at 6:41
please see below, hope that helps – Hal Swyers Oct 7 '12 at 17:17

1 Answer

This particular writing of the problem in the article I have always thought was sloppy as well. The most confusing part of the discussion is the statement "The continuity equation is as before". At first one writes the continuity equation as:
$$\nabla \cdot J + \dfrac{\partial\rho}{\partial t} = 0$$
Although the del operator can be defined to be infinite-dimensional, it is frequently reserved for three dimensions, and so the construction of the sentence does not provide a clear interpretation. If you look up conserved current you find the 4-vector version of the continuity equation:
$$\partial_\mu j^\mu = 0$$
What is important about the derivation in the Wikipedia article is the conversion of the original density, which involves no time derivative, into a density built from time derivatives:
$$\rho = \phi^*\phi$$
$$\rho = \dfrac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*)$$
The intent is clear: they want to make the time component have the same form as the space components. The equation of the current is now:
$$J^\mu = \dfrac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$$
which now contains the time component. So the continuity equation that should be used is:
$$\partial_\mu J^\mu = 0$$
where the capitalization of $J$ appears to be an arbitrary choice in the derivation.
One can verify that this is the intent by referring to the article on probability current. From the above, I can see that the sudden insertion of the statement that one can arbitrarily pick $$\psi$$ and $$\dfrac{\partial \psi}{\partial t}$$ isn't well explained. This part of the article was a source of confusion for me as well, until one realizes that the author was trying to get to a discussion of the Klein–Gordon equation.

A quick search of the web for "probability current and Klein–Gordon equation" finds good links, including a good one from the physics department at UC Davis. If you follow the discussion in the paper you can see it confirms that the argument is really trying to get to a discussion of the Klein–Gordon equation and make the connection to probability density. Now, if one does another quick search for "negative solutions to the Klein–Gordon equation" one can find a nice paper from the physics department of Ohio University. There we get some good discussion around equation 3.13 in the paper, which reiterates that, when we redefined the density, we introduced some additional variability. So the equation
$$\rho = \dfrac{i\hbar}{2mc^2}(\psi^*\partial_t\psi - \psi\partial_t\psi^*)$$
(where in the original, c was set to 1) really is at the root of the problem (confirming the intent in the original article).

However, it probably still doesn't answer the question "can anyone show me why the expression for density is not positive definite?", but if one goes on a little shopping spree you can find the book Quantum Field Theory Demystified by David McMahon (and there are some free downloads out there, but I won't link to them out of respect for the author), and if you go to p. 116 you will find the discussion:

Remembering the free particle solution
$$\varphi(\vec{x},t) = e^{-ip\cdot x} = e^{-i(Et- px)}$$
the time derivatives are
$$\dfrac{\partial\varphi}{\partial t} = -iEe^{-i(Et- px)}$$
$$\dfrac{\partial\varphi^*}{\partial t} = iEe^{i(Et- px)}$$
We have
$$\varphi^*\dfrac{\partial\varphi}{\partial t} = e^{i(Et- px)}\left[-iEe^{-i(Et- px)}\right] = -iE$$
$$\varphi\dfrac{\partial\varphi^*}{\partial t} = e^{-i(Et- px)}\left[iEe^{i(Et- px)}\right] = iE$$
So the probability density is
$$\rho = i\left(\varphi^*\dfrac{\partial\varphi}{\partial t} - \varphi\dfrac{\partial\varphi^*}{\partial t}\right) = i(-iE-iE) = 2E$$
Looks good so far - except for those pesky negative energy solutions. Remember that
$$E = \pm\sqrt{p^2+m^2}$$
In the case of the negative energy solution
$$\rho = 2E = -2\sqrt{p^2+m^2} < 0$$
which is a negative probability density, something which simply does not make sense.

Hopefully that helps. The notion of a negative probability does not make sense because we define probability on the interval [0,1], so by definition negative probabilities have no meaning. This point is sometimes lost on people when they try to make sense of things, but logically any discussion of negative probabilities is nonsense. This is why QFT ended up reinterpreting the Klein–Gordon equation and repurposing it as an equation that governs creation and annihilation operators.

With respect to McMahon's books, please see the cooperative effort to make errata sheets here. – Eduardo Guerras Valera Jan 31 '13 at 16:24
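The plane-wave bookkeeping quoted above is easy to verify symbolically. The following sketch is an independent check (not from McMahon's book), using SymPy in natural units with the same 2E normalization as the quoted text; it confirms that ρ = i(φ*∂_tφ − φ∂_tφ*) evaluates to 2E for a plane wave, and therefore goes negative for a negative-energy solution:

```python
import sympy as sp

t, x, E, p = sp.symbols("t x E p", real=True)

# Plane-wave solution of the Klein-Gordon equation (hbar = c = 1).
phi = sp.exp(-sp.I * (E * t - p * x))

# Klein-Gordon density rho = i (phi* d_t phi - phi d_t phi*).
rho = sp.I * (sp.conjugate(phi) * sp.diff(phi, t)
              - phi * sp.diff(sp.conjugate(phi), t))
rho = sp.simplify(rho)

print(rho)   # 2*E: positive for E > 0, negative for E = -sqrt(p**2 + m**2)
```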
This answer of mine has been strongly criticized on the grounds that it is no more than philosophical blabbering. Well, it may well be. But people seem to be of the opinion that the HUP alone does not ensure randomness, and that you need Bell's theorem and other features for the randomness in QM. However, I still believe that the HUP is all one needs to appreciate the probabilistic feature of QM; Bell's theorem and other such results only reinforce this probabilistic view. I am very much curious to know the right answer.

Asking a separate question instead of abusing the comments is a very good idea! – Sklivvz Apr 9 '11 at 10:17
@Sklivvz: except that this question is not interested in the past discussion (as per @sb1's comment to my answer), so the whole talk about Bell's theorem (which is just a misunderstanding on sb1's part anyway) shouldn't be present in this question at all. – Marek Apr 9 '11 at 10:29
I'm appalled that so many people piled on your previous Answer without leaving any comments. Kudos to Marek for having left a comment; however, the part of his comment that I agree with is that your Answer was not much to the point. It may be that the other downvoters felt that you didn't Answer the Question, not that it was philosophical blathering. I didn't downvote your Answer, but nor did I upvote it. – Peter Morgan Apr 9 '11 at 13:03
@Peter: No, there were comments exchanged which were not quite friendly. I guess the moderator has removed them all except the first comment. – user1355 Apr 9 '11 at 13:26
I am totally clueless about the above comment made by @Marek. He seems to assume a lot of things, which makes one quite surprised and detested! – user1355 Apr 9 '11 at 13:30

6 Answers

I'm not sure if an undergrad's perspective would be useful here - but I'll give it a shot (at worst I'll learn something new). David Griffiths's "Introduction to Quantum Mechanics" takes great care to motivate the uncertainty principle from more basic founding postulates of Q.M. First, Hilbert space and the state vector, as the description of the particle, are defined. Next, classical observables are formulated as operators on the state vector. Eigenvalues and the bases of the operators are explored, and it is revealed that for certain (conjugate) operators, the state vector cannot be written in the same basis if a unique value for those operators' corresponding observables is desired. It is shown that such operators do not commute. It is finally shown that from this non-commutativity the uncertainty principle can be mathematically derived.

So the point of this summary (all of which I'm sure you already know well) is the order in which things are done. Griffiths is so far my favorite textbook author, and I'm sure there is a reason he laid things out so explicitly. He stresses the classical nature of the observables and how the state vector is truly fundamental. It always seemed to me (and thus how I understand it) that what he was getting at is that observables like position and momentum are classical, and what we are doing is trying to perform classical observation on a quantum system. When we attempt to do this, we are putting limitations on the state vector that nature simply doesn't impose on her own. The result of this is that we end up with non-comeasurable observables, simply because of our classical bias in "translating" the true state of the particle, which is simply not completely expressible solely in terms of classical observables. To me this, what Q.M.
is actually doing, seems more fundamental than the HUP. Perhaps it borders on metaphysics - but it seems to be the logical conclusion of the math/algorithms. And since Bell's theorem was mentioned: the inputs for this theorem are already there in Q.M. - the theorem simply tells us how to properly combine them and then conclude the character of the correlations between observables. In a way (once again, as it seems to me) it "measures" what kind of probabilities we are expressing in our theory.

It's true that the uncertainty principle is derived, but what you say in your third paragraph doesn't make much sense. There's not really anything classical about observables. In fact, they act very non-classically since they have nontrivial commutation relations with other things. Observables are operators on the Hilbert space of states, and "project" out (in some sense) the information contained in the state vector you're looking for. The "classical" things are more related to expectation values, not the operators. I think Griffiths discusses this somewhere in the exercises. – Mr X Apr 10 '11 at 14:48
@Jeremy Price: what I was getting at is that things like "momentum" and "position" are not true quantum mechanical properties - rather, they are classical measures that we apply to the quantum world. But it is a true interpretation about the expectation values, from what I've read. Which is the root of the uncertainty principle, is it not - as the uncertainty of an observable is expressed as a deviation from the expectation value? (In Griffiths's derivation at least.) – user1567 Apr 10 '11 at 15:16
@jaskey13: I don't think it's right to say that about momentum and position. They're very real things quantum mechanically; we still have quantum mechanical analogues of, e.g., conservation of momentum and energy, despite the fact that they are not "well-defined" in a classical sense. In fact, if you look at how to derive the Schroedinger equation, you replace operators into E = p^2/2m and act this on a function, usually as a function of position, which is surely taking all of these properties very seriously and fundamentally! – Mr X Apr 12 '11 at 16:55
@Jeremy Price: Are these analogues those that come from an application of Ehrenfest's theorem? If not, could you please tell me what they are? – user1567 Apr 12 '11 at 21:48
My edition is surprisingly scant when it comes to conservation laws. Maybe it is time to move on to something more advanced. – user1567 Apr 12 '11 at 22:56

It's very strange for someone to say that "Bell's theorem ensures something in quantum mechanics". Bell's theorem is a theorem - something that can be mathematically proved to hold given the assumptions. It's valid in the same sense as $1+1=2$. Is $1+1=2$ needed for something in physics? Maybe - but the question clearly makes no sense. Mathematics is always valid in physics - and everywhere else. However, even the assumptions of Bell's theorem surely can't be "necessary building blocks" for some results in quantum mechanics, because Bell's theorem is not a theorem about quantum mechanics at all. It is a theorem (an inequality) about local realist theories - exactly the kind of theories that quantum mechanics is not. Whether someone needs $1+1=2$ doesn't matter because this fact is imposed upon him, anyway. Any proof may be modified so that $1+1=2$ is needed, and any proof may be modified so that $1+1=2$ is not needed.
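Since Bell's theorem keeps coming up in this thread, a concrete number may help. The sketch below is a minimal illustration (the state and angles are the standard textbook choices, not taken from any answer here): it computes the quantum CHSH correlator for the singlet state and compares it with the bound of 2 that Bell's theorem derives for local realist theories.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """Quantum correlator <A(a) x B(b)> in the singlet state."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH angles: a = 0, a' = pi/2, b = pi/4, b' = -pi/4.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

print(abs(S))          # 2*sqrt(2) ~ 2.828
print(2 * np.sqrt(2))  # the quantum maximum; local realism allows at most 2
```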
But even if one ignores the comment about "Bell's theorem and other such results", which can't possibly have anything to do with the question, it's nontrivial to make the question precise. The uncertainty principle is normally formulated as a part of quantum mechanics - we say that $\Delta x$ and $\Delta p$ can't have well-defined sharp values at the same moment. What does it mean for them not to have sharp values? Well, obviously, it means that one measures their values with an error margin, and the fluctuations, or the choice of the measured value from the allowed distribution, have to be random. If they were not random, there would have to be another quantity to which the same discussion would apply. Again, if the uncertainty principle applied to these hidden variables (and a complementary one), it would imply that their values have to be random.

Do you allow me to assume that the HUP holds for whatever variables we have? If you do, obviously, there have to be random things in the Universe. But even the term "random" is too ill-defined. Do you require some special vanishing of correlations etc.? If you do, shouldn't you describe what those requirements are? So I don't think it's possible to fully answer vague questions of this kind.

I would add the related comment that quantum mechanics - with its random character - is the only mathematically possible and self-consistent framework that is compatible with certain basic observations of the quantum phenomena. The outcomes in quantum mechanics take place randomly, with probabilities and probability distributions that can be calculated from the squared probability amplitudes, and all other attempts to modify the basic framework of quantum mechanics have been ruled out. If it's so, and it is so, there's really no point in trying to decompose the postulates of quantum mechanics into pieces, because the pieces only combine into a viable theoretical structure, able to explain the behavior of important worlds such as ours, when all these postulates are taken seriously at the same moment.

In my opinion the HUP is not a "principle" but a consequence of the mathematical framework of QM - it is derived rather than "postulated". Randomness or uncertainty in measuring some variable in some state is not strictly related to the uncertainty of its canonically conjugate variable; the HUP establishes some limitation on them, and that's it. What I want to underline is that, say, the uncertainty in momentum is determined by the given QM state itself. As for randomness, it is easy to understand if we remember that the information is gathered with the help of photons. When the number of photons in one "observation" is large, their average is well determined, and this is what classical physics deals with. If the number of photons is small, the uncertainty gives the impression of a strong randomness in measuring, say, the position of a body. Even the Moon's position is uncertain if based on few-photon measurements. Uncertainty in measurements is a fundamental feature of states in physics. Determinism is possible only for "well-averaged" measurements. Look at the Ehrenfest equations - they involve average (expectation) values, which implies many, many measurements. In other words, classical determinism is due to its inclusive character.

Well, you misinterpreted what I (and others) said in at least two important ways. 1. Bell's theorem surely isn't responsible for randomness in QM.
That's because it doesn't actually tell you anything about QM itself, only about other theories trying to reproduce the same results that QM (and nature) produces. The reason I mentioned it is that it (severely) restricts the class of non-random theories that can describe nature. Without such a theorem one might hope (and people still do) that it is possible to construct a deterministic framework that could be compatible with observations. So the HUP certainly doesn't imply intrinsic randomness. You need further work to establish that no viable theory (and not just QM) is deterministic. Measurement of the violation of Bell's inequalities is what does it (at least if one assumes locality).

2. QM is based on lots of principles. The HUP is fundamental (and is built in by including non-commutative operators in the framework) but no less fundamental than the other postulates. Trying to isolate one particular feature of a theory doesn't always make sense. You could try to obtain deterministic QM by removing the HUP, but that essentially means letting $\hbar \to 0$ and obtaining classical physics, thereby losing all the other special effects of QM. In other words, your statement that the "HUP ... is all one needs to appreciate the probabilistic feature of QM" could not be further removed from reality. To appreciate this probabilistic aspect, one needs to master the mathematical formalism of QM, the way it connects to experiment, and the way measurements are interpreted. The HUP is only a small part of it and, actually, the one thing you almost never care about, as it is built into the theory from the start.

You have misunderstood my point as well. I am well aware of all the fundamental postulates of QM. You need all those postulates for a fully functional quantum theory. However, my point is that the UP is the postulate responsible for the essential qualitative element of randomness in the theory. – user1355 Apr 9 '11 at 8:43
@sb1: that might be the case, but you start your question with "people seem to be of the opinion that...", which is simply not the case. People talked about something completely different last time, so I am not sure why you bring that in now if you only intend to give downvotes to people's replies. If you only want to talk about pure QM then I suggest you edit your question in order not to confuse people further. – Marek Apr 9 '11 at 8:49
@sb1: I think there's a Useful Answer in here (my +1). – Peter Morgan Apr 9 '11 at 13:37
@Peter: thank you. Well, I believe all I said is correct and relevant, but whether I've read @sb1's mind correctly as to what his intents were with this question, that's another story... – Marek Apr 9 '11 at 14:14

The title question is: Does the HUP alone ensure the randomness of QM? I claim that the answer to this question is: No.

The HUP has the basic forms:
$$\Delta E\,\Delta t \ge \hbar$$
$$\Delta x\,\Delta p \ge \hbar$$
Furthermore, quantum mechanics books prove that for non-commuting observables,
$$[P,Q] \neq 0,$$
$$\Delta P\,\Delta Q \ge \tfrac{1}{2}\,\lvert\langle[P,Q]\rangle\rvert.$$
So the HUP is proven generally as a consequence of the non-commutativity of the observables. Understanding why there are non-commuting observables in QM takes us to the rest of the postulates of QM, and this explains why the other answers say that the HUP is a consequence of QM in toto. However, there is more to the topic of "QM randomness" than this, and we have not yet responded to your remarks about Bell's Theorem.
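The commutator bound just quoted can be checked directly on a small example. The following sketch (a minimal illustration with an arbitrarily chosen spin-1/2 state, taking ℏ = 1) evaluates both sides of the Robertson inequality ΔP ΔQ ≥ ½|⟨[P,Q]⟩| for the non-commuting observables σx and σy:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary normalized spin-1/2 state.
psi = np.array([2.0, 1.0 + 1.0j], dtype=complex)
psi /= np.linalg.norm(psi)

def expect(op):
    return np.real(psi.conj() @ op @ psi)

def spread(op):
    """Standard deviation of an observable in the state psi."""
    return np.sqrt(expect(op @ op) - expect(op) ** 2)

lhs = spread(sx) * spread(sy)
comm = sx @ sy - sy @ sx                       # [sx, sy] = 2i*sz
rhs = 0.5 * abs(psi.conj() @ comm @ psi)

print(f"Delta(sx)*Delta(sy) = {lhs:.4f}")      # 5/9 ~ 0.5556
print(f"(1/2)|<[sx,sy]>|    = {rhs:.4f}")      # 1/3 ~ 0.3333
assert lhs >= rhs - 1e-12                      # Robertson inequality holds
```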
The first point to note is that in classical engineering there is a concept of time domain and frequency domain (for a wave), and the associated law
$$\Delta \omega\,\Delta t \ge 1.$$
This law is a consequence of the Fourier transform between these domains, whose kernel is
$$e^{-i\omega t}.$$
So the HUP formula is more widespread than just quantum mechanics. Of course, if one puts
$$E=\hbar \omega$$
then one obtains the energy-time uncertainty relation once again!

So where does quantum randomness (assuming, for the moment, that that is the correct term) come from? One published book that makes this point explicitly is Roger Penrose's "The Emperor's New Mind", p. 297:

[In quantum collapse...] these real numbers play a role as actual probabilities for the alternatives in question. Only one of the alternatives survives into the actuality of physical experience... It is here, and only here, that the non-determinism of quantum theory makes its entry.

The italics are mine (and this is where Penrose introduces his R definition for describing quantum wave function reduction). Thus, if you are familiar with quantum mechanics, this is the reduction postulate (in words). So we have several different concepts in play here: the HUP, the QM postulates, Bell's theorem, and randomness.

I have no confusion about the fundamentals of quantum mechanics, any more than anybody else here. Your engineering example is cute but wrong, in the sense that the error in measurement in that case can be made arbitrarily small by more accurate instruments. Any theory that comes with a HUP-like principle, where the uncertainty can't be made arbitrarily small, has to have probabilistic features. That's my understanding. – user1355 Apr 9 '11 at 16:02
@sb1: I think the moral of how this site (has to) work is that if a question appears to be asking for a textbook explanation of something, that is what will be provided by default. If one means to challenge, or extend, the textbooks on an apparently basic topic (which fundamental ones are), then the question formulation needs to be referenced, and so on. Phrases like "People believe..." don't convey exactly what was intended. So yes, there will be misunderstandings about what was intended here. I worked on the question title itself this time as my source for your meaning. Try again though. – Roy Simpson Apr 9 '11 at 21:18
@sb1: I think the "engineering example" Roy cites is relevant to this discussion, but I point out that it emerges in deterministic signal processing; stochastic SP is not needed (which you both may know). I find that a good author on this issue in SP is Leon Cohen, who I think writes very clearly. His "Time-Frequency Distributions - A Review", Proceedings of the IEEE, Vol. 77, No. 7, July 1989, page 941, where he discusses the relationship between quantum mechanics and SP from 30 years' experience, is 40 pages that are well worth reading. – Peter Morgan Apr 9 '11 at 22:32
@sb1: The error in measurement in the SP case cannot be made arbitrarily small, because the concept of measuring the amplitude of the signal at a given frequency requires measurement of the signal at all times, so that a perfect Fourier transform of the signal can be constructed. If we measure the signal for only a finite time, we can effectively only compute the Fourier components of the signal we want in convolution with a window function. – Peter Morgan Apr 9 '11 at 22:41
@Peter, thanks for the link.
Actually, this Answer is only part of a larger Answer I had developed for this question, which developed the point about the time-frequency domain (and another example) much further. But when I checked against the OP's question I found that my conclusions had nothing much to do with the original question, so I truncated the answer to what the OP seemed to be asking. This Answer is now being downvoted, probably because it doesn't address an ambiguous question, so I will probably delete it and not answer any more ambiguous questions of this type. – Roy Simpson Apr 10 '11 at 16:08

I think I'm largely going to repeat what Roy, Vladimir, and Jaskey13 have already said, but perhaps, I hope, not so totally that this won't be Useful. I take it that the HUP, despite its grandiose title, is not a principle; it's derived as a consequence of the various mathematical structures of QM. As such, the HUP is a part of a characterization of the properties of QM. The HUP is, however, something of a lesser part of that characterization, because it is not enough to characterize all the differences between classical stochastic physics and QM. It is possible, as Roy says, to construct local classical models for which, under a reasonable physical interpretation of the mathematics, the HUP is true.

I'm not completely sure what you mean by "HUP alone does not ensure randomness"? I suppose the interpretation of QM is all probability all the time. In various comments you protest, and I believe you, that you know the axioms of QM and their basic interpretation well enough. What I take you to mean is that "HUP alone does not ensure intrinsic randomness". This qualification, which is fairly commonly used, makes sense, to me, of your following comment, with my qualification inserted, that "you need Bell's theorem and other features for the [intrinsic] randomness in QM", whereas the relevance of Bell inequalities to your Question seems to have troubled other people here. I take “intrinsic” to be a rather coded way to say that a classical probability theory is not isomorphic to quantum probability theory. I've previously cited on Physics SE the presentations of Bell-CHSH inequalities that I think best make this clear, due to Landau and to de Muynck, here, where I note that you also left a notably Useful(8) Answer. Their derivations use the CCRs in a way that is not significantly more obscure than the derivation of the HUP does. I take the Bell-CHSH inequalities to be a reasonable lowest-order characterization of the difference. There is of course confusion concerning the relevance of locality to the Bell inequalities, which I think could get in the way of my discussion here, but I see that you have a relatively sophisticated view of that confusion.

The UP can be derived from the Schrödinger equation, and introductory textbooks normally derive it that way. But in advanced courses one learns that the Schrödinger equation can be derived from the basic axioms of quantum theory. These axioms are held to be the most fundamental postulates about nature, and they lead directly to the general uncertainty relationship. The catch here, imho, is that the UP encompasses the gist of the theory. In order to be a quantum theory, all a theory needs is to be consistent with the UP. It is truly a fundamental principle of the QT in this sense. – user1355 Apr 10 '11 at 15:10
-1 is not mine. – user1355 Apr 10 '11 at 15:14
@sb1 Downvoting was all too likely for my Answer.
Downvotes are meaningless unless someone is wise enough to be able to say why, at least for other readers, if not for the Answerer. Your idea that the HUP is truly a principle, and enough to make a theory a quantum theory, seems to me quite radical. I think I don't see that in quantum logic or axiomatic approaches. It's often done from the CCRs, which give CHSH, etc. Is there a proof that a theory that satisfies the HUP (and what other conditions?) must violate the Bell inequalities? Otherwise, what you're proposing seems rather different from QM. – Peter Morgan Apr 10 '11 at 17:23
Degree Type: Honors Thesis

Though classical random walks have been studied for many years, research concerning their quantum analogues, quantum random walks, has only come about recently. Numerous simulations of both types of walks have been run and analyzed, and both are generally well understood. Research pertaining to one of the more important properties of classical random walks, namely their ability to build fractal structures in diffusion-limited aggregation, has been particularly noteworthy. However, nobody has yet pursued this avenue of research in the realm of quantum random walks.

The study of random walks and the structures they build has various applications in materials science. Since all processes are quantum in nature, it is very important to consider the quantum variant of diffusion-limited aggregation. Quantum diffusion-limited aggregation is an important step forward in understanding particle aggregation in areas where quantum effects are dominant, such as low-temperature chemistry and the development of techniques for forming thin films.

Recognizing that the Schrödinger equation and a classical random walk are both diffusion equations, it is possible to connect and compare them. Using similar parameters for both equations, we ran various simulations aggregating particles. Our results show that the quantum diffusion process can create fractal structures, much like the classical random walk. Furthermore, the fractal dimensions of these quantum diffusion-limited aggregates vary between 1.43 and 2, depending on the size of the initial wave packet.
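For comparison with the classical baseline the thesis builds on, here is a minimal diffusion-limited aggregation sketch: random walkers diffuse on a lattice and stick when they step next to a growing cluster. The lattice size, walker count, and crude mass-radius dimension estimate are arbitrary illustrative choices, and this simulates only the classical walk, not the quantum variant studied in the thesis.

```python
import math
import random

random.seed(1)
N = 61                                   # lattice size (illustrative choice)
cx = cy = N // 2
stuck = {(cx, cy)}                       # seed particle at the center

def neighbors(x, y):
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

for _ in range(300):                     # release 300 random walkers
    x, y = random.randrange(N), random.randrange(N)
    while (x, y) in stuck:               # don't start on the cluster itself
        x, y = random.randrange(N), random.randrange(N)
    while True:                          # diffuse until touching the cluster
        x, y = random.choice(neighbors(x, y))
        x, y = x % N, y % N              # periodic boundary, for simplicity
        if any(n in stuck for n in neighbors(x, y)):
            stuck.add((x, y))            # the walker sticks and stops
            break

# Crude mass-radius estimate: for a fractal, M(r) ~ r^D with D below 2.
for r in (5, 10, 15):
    m = sum(1 for (x, y) in stuck if max(abs(x - cx), abs(y - cy)) <= r)
    print(r, m, round(math.log(m) / math.log(r), 2))
```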
Table of Contents
Section 13.1 - Rules of Randomness
Section 13.2 - Light As a Particle
Section 13.3 - Matter As a Wave
Section 13.4 - The Atom

Chapter 13. Quantum Physics

13.1 Rules of Randomness

a / In 1980, the continental U.S. got its first taste of active volcanism in recent memory with the eruption of Mount St. Helens.

Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective positions of the things which compose it...nothing would be uncertain, and the future as the past would be laid out before its eyes. -- Pierre Simon de Laplace, 1776

The energy produced by the atom is a very poor kind of thing. Anyone who expects a source of power from the transformation of these atoms is talking moonshine. -- Ernest Rutherford, 1933

The Quantum Mechanics is very imposing. But an inner voice tells me that it is still not the final truth. The theory yields much, but it hardly brings us nearer to the secret of the Old One. In any case, I am convinced that He does not play dice. -- Albert Einstein

However radical Newton's clockwork universe seemed to his contemporaries, by the early twentieth century it had become a sort of smugly accepted dogma. Luckily for us, this deterministic picture of the universe breaks down at the atomic level. The clearest demonstration that the laws of physics contain elements of randomness is in the behavior of radioactive atoms. Pick two identical atoms of a radioactive isotope, say the naturally occurring uranium 238, and watch them carefully. They will decay at different times, even though there was no difference in their initial behavior.

We would be in big trouble if these atoms' behavior was as predictable as expected in the Newtonian world-view, because radioactivity is an important source of heat for our planet. In reality, each atom chooses a random moment at which to release its energy, resulting in a nice steady heating effect. The earth would be a much colder planet if only sunlight heated it and not radioactivity. Probably there would be no volcanoes, and the oceans would never have been liquid. The deep-sea geothermal vents in which life first evolved would never have existed. But there would be an even worse consequence if radioactivity was deterministic: after a few billion years of peace, all the uranium 238 atoms in our planet would presumably pick the same moment to decay. The huge amount of stored nuclear energy, instead of being spread out over eons, would all be released at one instant, blowing our whole planet to Kingdom Come.1

The new version of physics, incorporating certain kinds of randomness, is called quantum physics (for reasons that will become clear later). It represented such a dramatic break with the previous, deterministic tradition that everything that came before is considered “classical,” even the theory of relativity. This chapter is a basic introduction to quantum physics.

Discussion Question

I said “Pick two identical atoms of a radioactive isotope.” Are two atoms really identical? If their electrons are orbiting the nucleus, can we distinguish each atom by the particular arrangement of its electrons at some instant in time?

13.1.1 Randomness isn't random.

Einstein's distaste for randomness, and his association of determinism with divinity, goes back to the Enlightenment conception of the universe as a gigantic piece of clockwork that only had to be set in motion initially by the Builder.
Many of the founders of quantum mechanics were interested in possible links between physics and Eastern and Western religious and philosophical thought, but every educated person has a different concept of religion and philosophy. Bertrand Russell remarked, “Sir Arthur Eddington deduces religion from the fact that atoms do not obey the laws of mathematics. Sir James Jeans deduces it from the fact that they do.” Russell's witticism, which implies incorrectly that mathematics cannot describe randomness, reminds us how important it is not to oversimplify this question of randomness. You should not simply surmise, “Well, it's all random, anything can happen.” For one thing, certain things simply cannot happen, either in classical physics or quantum physics. The conservation laws of mass, energy, momentum, and angular momentum are still valid, so for instance processes that create energy out of nothing are not just unlikely according to quantum physics, they are impossible.

A useful analogy can be made with the role of randomness in evolution. Darwin was not the first biologist to suggest that species changed over long periods of time. His two new fundamental ideas were that (1) the changes arose through random genetic variation, and (2) changes that enhanced the organism's ability to survive and reproduce would be preserved, while maladaptive changes would be eliminated by natural selection. Doubters of evolution often consider only the first point, about the randomness of natural variation, but not the second point, about the systematic action of natural selection. They make statements such as, “the development of a complex organism like Homo sapiens via random chance would be like a whirlwind blowing through a junkyard and spontaneously assembling a jumbo jet out of the scrap metal.” The flaw in this type of reasoning is that it ignores the deterministic constraints on the results of random processes. For an atom to violate conservation of energy is no more likely than the conquest of the world by chimpanzees next year.

Discussion Question
Economists often behave like wannabe physicists, probably because it seems prestigious to make numerical calculations instead of talking about human relationships and organizations like other social scientists. Their striving to make economics work like Newtonian physics extends to a parallel use of mechanical metaphors, as in the concept of a market's supply and demand acting like a self-adjusting machine, and the idealization of people as economic automatons who consistently strive to maximize their own wealth. What evidence is there for randomness rather than mechanical determinism in economics?

b / Normalization: the probability of picking land plus the probability of picking water adds up to 1.

c / Why are dice random?

13.1.2 Calculating randomness

You should also realize that even if something is random, we can still understand it, and we can still calculate probabilities numerically. In other words, physicists are good bookmakers. A good bookmaker can calculate the odds that a horse will win a race much more accurately than an inexperienced one, but nevertheless cannot predict what will happen in any particular race.

Statistical independence

As an illustration of a general technique for calculating odds, suppose you are playing a 25-cent slot machine. Each of the three wheels has one chance in ten of coming up with a cherry. If all three wheels come up cherries, you win $100.
Even though the results of any particular trial are random, you can make certain quantitative predictions. First, you can calculate that your odds of winning on any given trial are \(1/10\times1/10\times1/10=1/1000=0.001\). Here, I am representing the probabilities as numbers from 0 to 1, which is clearer than statements like “The odds are 999 to 1,” and makes the calculations easier. A probability of 0 represents something impossible, and a probability of 1 represents something that will definitely happen. Also, you can say that any given trial is equally likely to result in a win, and it doesn't matter whether you have won or lost in prior games. Mathematically, we say that each trial is statistically independent, or that separate games are uncorrelated. Most gamblers are mistakenly convinced that, to the contrary, games of chance are correlated. If they have been playing a slot machine all day, they are convinced that it is “getting ready to pay,” and they do not want anyone else playing the machine and “using up” the jackpot that they “have coming.” In other words, they are claiming that a series of trials at the slot machine is negatively correlated, that losing now makes you more likely to win later. Craps players claim that you should go to a table where the person rolling the dice is “hot,” because she is likely to keep on rolling good numbers. Craps players, then, believe that rolls of the dice are positively correlated, that winning now makes you more likely to win later.

My method of calculating the probability of winning on the slot machine was an example of the following important rule for calculations based on independent probabilities:

The law of independent probabilities
If the probability of one event happening is \(P_A\), and the probability of a second statistically independent event happening is \(P_B\), then the probability that they will both occur is the product of the probabilities, \(P_AP_B\). If there are more than two events involved, you simply keep on multiplying. This can be taken as the definition of statistical independence.

Note that this only applies to independent probabilities. For instance, if you have a nickel and a dime in your pocket, and you randomly pull one out, there is a probability of 0.5 that it will be the nickel. If you then replace the coin and again pull one out randomly, there is again a probability of 0.5 of coming up with the nickel, because the probabilities are independent. Thus, there is a probability of 0.25 that you will get the nickel both times. Suppose instead that you do not replace the first coin before pulling out the second one. Then you are bound to pull out the other coin the second time, and there is no way you could pull the nickel out twice. In this situation, the two trials are not independent, because the result of the first trial has an effect on the second trial. The law of independent probabilities does not apply, and the probability of getting the nickel twice is zero, not 0.25.

Experiments have shown that in the case of radioactive decay, the probability that any nucleus will decay during a given time interval is unaffected by what is happening to the other nuclei, and is also unrelated to how long it has gone without decaying. The first observation makes sense, because nuclei are isolated from each other at the centers of their respective atoms, and therefore have no physical way of influencing each other. The second fact is also reasonable, since all atoms are identical.
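Returning for a moment to the slot machine and the coins, the law of independent probabilities is easy to check by simulation. Here is a minimal sketch; the trial count is an arbitrary illustrative choice:

```python
# Minimal sketch: checking independent vs. dependent probabilities by
# simulation.  The trial count is an arbitrary illustrative choice.
import random

TRIALS = 100_000

# Slot machine: three independent wheels, each with a 1/10 chance of a cherry.
wins = sum(all(random.random() < 0.1 for _ in range(3)) for _ in range(TRIALS))
print(wins / TRIALS)                    # comes out near 1/1000 = 0.001

# Nickel and dime, second draw WITH replacement: independent trials.
with_repl = sum(random.choice("ND") == "N" and random.choice("ND") == "N"
                for _ in range(TRIALS))
print(with_repl / TRIALS)               # near 0.5 x 0.5 = 0.25

# Second draw WITHOUT replacement: the trials are no longer independent.
def nickel_twice():
    pocket = ["N", "D"]
    first = pocket.pop(random.randrange(2))
    return first == "N" and pocket[0] == "N"

print(sum(nickel_twice() for _ in range(TRIALS)) / TRIALS)   # exactly 0
```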
Suppose we wanted to believe that certain atoms were “extra tough,” as demonstrated by their history of going an unusually long time without decaying. Those atoms would have to be different in some physical way, but nobody has ever succeeded in detecting differences among atoms. There is no way for an atom to be changed by the experiences it has in its lifetime.

Addition of probabilities

The law of independent probabilities tells us to use multiplication to calculate the probability that both A and B will happen, assuming the probabilities are independent. What about the probability of an “or” rather than an “and”? If two events \(A\) and \(B\) are mutually exclusive, then the probability of one or the other occurring is the sum \(P_A+P_B\). For instance, a bowler might have a 30% chance of getting a strike (knocking down all ten pins) and a 20% chance of knocking down nine of them. The bowler's chance of knocking down either nine pins or ten pins is therefore 50%.

It does not make sense to add probabilities of things that are not mutually exclusive, i.e., that could both happen. Say I have a 90% chance of eating lunch on any given day, and a 90% chance of eating dinner. The probability that I will eat either lunch or dinner is not 180%.

If I spin a globe and randomly pick a point on it, I have about a 70% chance of picking a point that's in an ocean and a 30% chance of picking a point on land. The probability of picking either water or land is \(70\%+30\%=100\%\). Water and land are mutually exclusive, and there are no other possibilities, so the probabilities had to add up to 100%. It works the same if there are more than two possibilities --- if you can classify all possible outcomes into a list of mutually exclusive results, then all the probabilities have to add up to 1, or 100%. This property of probabilities is known as normalization.

Another way of dealing with randomness is to take averages. The casino knows that in the long run, the number of times you win will approximately equal the number of times you play multiplied by the probability of winning. In the slot-machine game described on page 829, where the probability of winning is 0.001, if you spend a week playing, and pay $2500 to play 10,000 times, you are likely to win about 10 times \((10,000\times0.001=10)\), and collect $1000. On the average, the casino will make a profit of $1500 from you. This is an example of the following rule.

Rule for Calculating Averages
If you conduct \(N\) identical, statistically independent trials, and the probability of success in each trial is \(P\), then on the average, the total number of successful trials will be \(NP\). If \(N\) is large enough, the relative error in this estimate will become small.

The statement that the rule for calculating averages gets more and more accurate for larger and larger \(N\) (known popularly as the “law of averages”) often provides a correspondence principle that connects classical and quantum physics. For instance, the amount of power produced by a nuclear power plant is not random at any detectable level, because the number of atoms in the reactor is so large. In general, random behavior at the atomic level tends to average out when we consider large numbers of atoms, which is why physics seemed deterministic before physicists learned techniques for studying atoms individually. We can achieve great precision with averages in quantum physics because we can use identical atoms to reproduce exactly the same situation many times.
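The rule for calculating averages is just as easy to check numerically. A minimal sketch, with an arbitrary number of simulated weeks:

```python
# Minimal sketch: the rule for calculating averages, N*P, applied to
# the slot-machine game.  The number of simulated weeks is arbitrary.
import random

def wins_in_a_week(n_plays=10_000, p=0.001):
    """Count wins in one week of statistically independent plays."""
    return sum(random.random() < p for _ in range(n_plays))

weeks = [wins_in_a_week() for _ in range(1000)]
print(sum(weeks) / len(weeks))   # close to N*P = 10,000 * 0.001 = 10
# Any single week may differ noticeably from 10 wins, but the average
# over many weeks hugs N*P, and its relative error shrinks as N grows.
```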
If we were betting on horses or dice, we would be much more limited in our precision. After a thousand races, the horse would be ready to retire. After a million rolls, the dice would be worn out.

Which of the following things must be independent, which could be independent, and which definitely are not independent? (1) the probability of successfully making two free-throws in a row in basketball; (2) the probability that it will rain in London tomorrow and the probability that it will rain on the same day in a certain city in a distant galaxy; (3) your probability of dying today and of dying tomorrow.

Discussion Questions
Newtonian physics is an essentially perfect approximation for describing the motion of a pair of dice. If Newtonian physics is deterministic, why do we consider the result of rolling dice to be random?
Why isn't it valid to define randomness by saying that randomness is when all the outcomes are equally likely?
The sequence of digits 121212121212121212 seems clearly nonrandom, and 41592653589793 seems random. The latter sequence, however, is the decimal form of pi, starting with the third digit. There is a story about the Indian mathematician Ramanujan, a self-taught prodigy, that a friend came to visit him in a cab, and remarked that the number of the cab, 1729, seemed relatively uninteresting. Ramanujan replied that on the contrary, it was very interesting because it was the smallest number that could be represented in two different ways as the sum of two cubes. The Argentine author Jorge Luis Borges wrote a short story called “The Library of Babel,” in which he imagined a library containing every book that could possibly be written using the letters of the alphabet. It would include a book containing only the repeated letter “a;” all the ancient Greek tragedies known today, all the lost Greek tragedies, and millions of Greek tragedies that were never actually written; your own life story, and various incorrect versions of your own life story; and countless anthologies containing a short story called “The Library of Babel.” Of course, if you picked a book from the shelves of the library, it would almost certainly look like a nonsensical sequence of letters and punctuation, but it's always possible that the seemingly meaningless book would be a science-fiction screenplay written in the language of a Neanderthal tribe, or the lyrics to a set of incomparably beautiful love songs written in a language that never existed. In view of these examples, what does it really mean to say that something is random?

d / Probability distribution for the result of rolling a single die.

e / Rolling two dice and adding them up.

f / A probability distribution for height of human adults (not real data).

g / Example 1.

h / The average of a probability distribution.

i / The full width at half maximum (FWHM) of a probability distribution.

13.1.3 Probability distributions

So far we've discussed random processes having only two possible outcomes: yes or no, win or lose, on or off. More generally, a random process could have a result that is a number. Some processes yield integers, as when you roll a die and get a result from one to six, but some are not restricted to whole numbers, for example the number of seconds that a uranium-238 atom will exist before undergoing radioactive decay. Consider a throw of a die. If the die is “honest,” then we expect all six values to be equally likely. Since all six probabilities must add up to 1, the probability of any particular value coming up must be 1/6.
We can summarize this in a graph, d. Areas under the curve can be interpreted as total probabilities. For instance, the area under the curve from 1 to 3 is \(1/6+1/6+1/6=1/2\), so the probability of getting a result from 1 to 3 is 1/2. The function shown on the graph is called the probability distribution.

Figure e shows the probabilities of various results obtained by rolling two dice and adding them together, as in the game of craps. The probabilities are not all the same. There is a small probability of getting a two, for example, because there is only one way to do it, by rolling a one and then another one. The probability of rolling a seven is high because there are six different ways to do it: 1+6, 2+5, etc. If the number of possible outcomes is large but finite, for example the number of hairs on a dog, the graph would start to look like a smooth curve rather than a ziggurat.

What about probability distributions for random numbers that are not integers? We can no longer make a graph with probability on the \(y\) axis, because the probability of getting a given exact number is typically zero. For instance, there is zero probability that a radioactive atom will last for exactly 3 seconds, since there are infinitely many possible results that are close to 3 but not exactly 3: 2.999999999999999996876876587658465436, for example. It doesn't usually make sense, therefore, to talk about the probability of a single numerical result, but it does make sense to talk about the probability of a certain range of results. For instance, the probability that an atom will last more than 3 and less than 4 seconds is a perfectly reasonable thing to discuss. We can still summarize the probability information on a graph, and we can still interpret areas under the curve as probabilities.

But the \(y\) axis can no longer be a unitless probability scale. In radioactive decay, for example, we want the \(x\) axis to have units of time, and we want areas under the curve to be unitless probabilities. The area of a single square on the graph paper is then \[\begin{gather*} \text{(unitless area of a square)} \\ = \text{(width of square with time units)}\\ \times \text{(height of square)} . \end{gather*}\] If the units are to cancel out, then the height of the square must evidently be a quantity with units of inverse time. In other words, the \(y\) axis of the graph is to be interpreted as probability per unit time, not probability.

Figure f shows another example, a probability distribution for people's height. This kind of bell-shaped curve is quite common.

Compare the number of people with heights in the range of 130-135 cm to the number in the range 135-140.

Example 1: Looking for tall basketball players
\(\triangleright\) A certain country with a large population wants to find very tall people to be on its Olympic basketball team and strike a blow against western imperialism. Out of a pool of \(10^8\) people who are the right age and gender, how many are they likely to find who are over 225 cm (7 feet 4 inches) in height? Figure g gives a close-up of the “tail” of the distribution shown previously in figure f.

\(\triangleright\) The shaded area under the curve represents the probability that a given person is tall enough. Each rectangle represents a probability of \(0.2\times10^{-7}\ \text{cm}^{-1} \times 1\ \text{cm}=2\times10^{-8}\).
There are about 35 rectangles covered by the shaded area, so the probability of having a height greater than 225 cm is \(7\times10^{-7}\), or just under one in a million. Using the rule for calculating averages, the average, or expected number of people this tall is \((10^8)\times(7\times10^{-7})=70\).

Average and width of a probability distribution

If the next Martian you meet asks you, “How tall is an adult human?,” you will probably reply with a statement about the average human height, such as “Oh, about 5 feet 6 inches.” If you wanted to explain a little more, you could say, “But that's only an average. Most people are somewhere between 5 feet and 6 feet tall.” Without bothering to draw the relevant bell curve for your new extraterrestrial acquaintance, you've summarized the relevant information by giving an average and a typical range of variation. The average of a probability distribution can be defined geometrically as the horizontal position at which it could be balanced if it was constructed out of cardboard. A convenient numerical measure of the amount of variation about the average, or amount of uncertainty, is the full width at half maximum, or FWHM, shown in figure i.

A great deal more could be said about this topic, and indeed an introductory statistics course could spend months on ways of defining the center and width of a distribution. Rather than force-feeding you on mathematical detail or techniques for calculating these things, it is perhaps more relevant to point out simply that there are various ways of defining them, and to inoculate you against the misuse of certain definitions. The average is not the only possible way to say what is a typical value for a quantity that can vary randomly; another possible definition is the median, defined as the value that is exceeded with 50% probability. When discussing incomes of people living in a certain town, the average could be very misleading, since it can be affected massively if a single resident of the town is Bill Gates. Nor is the FWHM the only possible way of stating the amount of random variation; another possible way of measuring it is the standard deviation (defined as the square root of the average squared deviation from the average value).

13.1.4 Exponential decay and half-life

Most people know that radioactivity “lasts a certain amount of time,” but that simple statement leaves out a lot. As an example, consider the following medical procedure used to diagnose thyroid function. A very small quantity of the isotope \(^{131}\text{I}\), produced in a nuclear reactor, is fed to or injected into the patient. The body's biochemical systems treat this artificial, radioactive isotope exactly the same as \(^{127}\text{I}\), which is the only naturally occurring type. (Nutritionally, iodine is a necessary trace element. Iodine taken into the body is partly excreted, but the rest becomes concentrated in the thyroid gland. Iodized salt has had iodine added to it to prevent the nutritional deficiency known as goiters, in which the iodine-starved thyroid becomes swollen.) As the \(^{131}\text{I}\) undergoes beta decay, it emits electrons, neutrinos, and gamma rays. The gamma rays can be measured by a detector passed over the patient's body. As the radioactive iodine becomes concentrated in the thyroid, the amount of gamma radiation coming from the thyroid becomes greater, and that emitted by the rest of the body is reduced.
The rate at which the iodine concentrates in the thyroid tells the doctor about the health of the thyroid.

If you ever undergo this procedure, someone will presumably explain a little about radioactivity to you, to allay your fears that you will turn into the Incredible Hulk, or that your next child will have an unusual number of limbs. Since iodine stays in your thyroid for a long time once it gets there, one thing you'll want to know is whether your thyroid is going to become radioactive forever. They may just tell you that the radioactivity “only lasts a certain amount of time,” but we can now carry out a quantitative derivation of how the radioactivity really will die out.

Let \(P_{surv}(t)\) be the probability that an iodine atom will survive without decaying for a period of at least \(t\). It has been experimentally measured that half of all \(^{131}\text{I}\) atoms decay in 8 hours, so we have \[\begin{equation*} P_{surv}(8\ \text{hr}) = 0.5 . \end{equation*}\] Now using the law of independent probabilities, the probability of surviving for 16 hours equals the probability of surviving for the first 8 hours multiplied by the probability of surviving for the second 8 hours, \[\begin{align*} P_{surv}(16\ \text{hr}) &= 0.50\times0.50 \\ &= 0.25 . \end{align*}\] Similarly we have \[\begin{align*} P_{surv}(24\ \text{hr}) &= 0.50\times0.5\times0.5 \\ &= 0.125 . \end{align*}\] Generalizing from this pattern, the probability of surviving for any time \(t\) that is a multiple of 8 hours is \[\begin{equation*} P_{surv}(t) = 0.5^{t/8\ \text{hr}} . \end{equation*}\]

We now know how to find the probability of survival at intervals of 8 hours, but what about the points in time in between? What would be the probability of surviving for 4 hours? Well, using the law of independent probabilities again, we have \[\begin{equation*} P_{surv}(8\ \text{hr}) = P_{surv}(4\ \text{hr}) \times P_{surv}(4\ \text{hr}) , \end{equation*}\] which can be rearranged to give \[\begin{align*} P_{surv}(4\ \text{hr}) &= \sqrt{P_{surv}(8\ \text{hr})} \\ &= \sqrt{0.5} \\ &= 0.707 . \end{align*}\] This is exactly what we would have found simply by plugging in \(P_{surv}(t)=0.5^{t/8\ \text{hr}}\) and ignoring the restriction to multiples of 8 hours. Since 8 hours is the amount of time required for half of the atoms to decay, it is known as the half-life, written \(t_{1/2}\). The general rule is as follows:

Exponential Decay Equation
\[\begin{equation*} P_{surv}(t) = 0.5^{t/t_{1/2}} \end{equation*}\]

Using the rule for calculating averages, we can also find the number of atoms, \(N(t)\), remaining in a sample at time \(t\): \[\begin{equation*} N(t) = N(0) \times 0.5^{t/t_{1/2}} \end{equation*}\] Both of these equations have graphs that look like dying-out exponentials, as in the example below.

Example 2: Radioactive contamination at Chernobyl
\(\triangleright\) One of the most dangerous radioactive isotopes released by the Chernobyl disaster in 1986 was \(^{90}\text{Sr}\), whose half-life is 28 years. (a) How long will it be before the contamination is reduced to one tenth of its original level? (b) If a total of \(10^{27}\) atoms was released, about how long would it be before not a single atom was left?

\(\triangleright\) (a) We want to know the amount of time that a \(^{90}\text{Sr}\) nucleus has a probability of 0.1 of surviving. Starting with the exponential decay formula, \[\begin{equation*} P_{surv} = 0.5^{t/t_{1/2}} , \end{equation*}\] we want to solve for \(t\).
Taking natural logarithms of both sides, \[\begin{equation*} \ln P = \frac{t}{t_{1/2}}\ln 0.5 , \end{equation*}\] \[\begin{equation*} t = \frac{t_{1/2}}{\ln 0.5}\ln P \end{equation*}\] Plugging in \(P=0.1\) and \(t_{1/2}=28\) years, we get \(t=93\) years.

(b) This is just like the first part, but \(P=10^{-27}\). The result is about 2500 years.

j / Calibration of the \(^{14}\text{C}\) dating method using tree rings and artifacts whose ages were known from other methods. Redrawn from Emilio Segrè, Nuclei and Particles, 1965.

Example 3: \(^{14}\text{C}\) dating
Almost all the carbon on Earth is \(^{12}\text{C}\), but not quite. The isotope \(^{14}\text{C}\), with a half-life of 5600 years, is produced by cosmic rays in the atmosphere. It decays naturally, but is replenished at such a rate that the fraction of \(^{14}\text{C}\) in the atmosphere remains constant, at \(1.3\times10^{-12}\). Living plants and animals take in both \(^{12}\text{C}\) and \(^{14}\text{C}\) from the atmosphere and incorporate both into their bodies. Once the living organism dies, it no longer takes in C atoms from the atmosphere, and the proportion of \(^{14}\text{C}\) gradually falls off as it undergoes radioactive decay. This effect can be used to find the age of dead organisms, or human artifacts made from plants or animals. Figure j on page 838 shows the exponential decay curve of \(^{14}\text{C}\) in various objects. Similar methods, using longer-lived isotopes, provided the first firm proof that the earth was billions of years old, not a few thousand as some had claimed on religious grounds.

Rate of decay

If you want to find how many radioactive decays occur within a time interval lasting from time \(t\) to time \(t+\Delta t\), the most straightforward approach is to calculate it like this: \[\begin{align*} (\text{number of}&\text{ decays between } t \text{ and } t+\Delta t) \\ &= N(t) - N(t+\Delta t) \end{align*}\] Usually we're interested in the case where \(\Delta t\) is small compared to \(t_{1/2}\), and in this limiting case the calculation starts to look exactly like the limit that goes into the definition of the derivative \(dN/dt\). It is therefore more convenient to talk about the rate of decay \(-dN/dt\) rather than the number of decays in some finite time interval. Doing calculus on the function \(e^x\) is also easier than with \(0.5^x\), so we rewrite the function \(N(t)\) as \[\begin{equation*} N = N(0) e^{-t/\tau} , \end{equation*}\] where \(\tau=t_{1/2}/\ln 2\) is shown in example 6 on p. 841 to be the average time of survival. The rate of decay is then \[\begin{equation*} -\frac{dN}{dt} = \frac{N(0)}{\tau} e^{-t/\tau} . \end{equation*}\] Mathematically, differentiating an exponential just gives back another exponential. Physically, this is telling us that as \(N\) falls off exponentially, the rate of decay falls off at the same exponential rate, because a lower \(N\) means fewer atoms that remain available to decay.

Check that both sides of the equation for the rate of decay have units of \(\text{s}^{-1}\), i.e., decays per unit time.

Example 4: The hot potato
\(\triangleright\) A nuclear physicist with a demented sense of humor tosses you a cigar box, yelling “hot potato.” The label on the box says “contains \(10^{20}\) atoms of \(^{17}\text{F}\), half-life of 66 s, produced today in our reactor at 1 p.m.” It takes you two seconds to read the label, after which you toss it behind some lead bricks and run away. The time is 1:40 p.m. Will you die?
\(\triangleright\) The time elapsed since the radioactive fluorine was produced in the reactor was 40 minutes, or 2400 s. The number of elapsed half-lives is therefore \(t/t_{1/2}= 36\). The initial number of atoms was \(N(0)=10^{20}\). The number of decays per second is now about \(10^7\ \text{s}^{-1}\), so it produced about \(2\times10^7\) high-energy electrons while you held it in your hands. Although twenty million electrons sounds like a lot, it is not really enough to be dangerous.

By the way, none of the equations we've derived so far was the actual probability distribution for the time at which a particular radioactive atom will decay. That probability distribution would be found by substituting \(N(0)=1\) into the equation for the rate of decay.

Discussion Questions
In the medical procedure involving \(^{131}\text{I}\), why is it the gamma rays that are detected, not the electrons or neutrinos that are also emitted?
For 1 s, Fred holds in his hands 1 kg of radioactive stuff with a half-life of 1000 years. Ginger holds 1 kg of a different substance, with a half-life of 1 min, for the same amount of time. Did they place themselves in equal danger, or not?
How would you interpret it if you calculated \(N(t)\), and found it was less than one?
Does the half-life depend on how much of the substance you have? Does the expected time until the sample decays completely depend on how much of the substance you have?

13.1.5 Applications of calculus

The area under the probability distribution is of course an integral. If we call the random number \(x\) and the probability distribution \(D(x)\), then the probability that \(x\) lies in a certain range is given by \[\begin{equation*} \text{(probability of $a\le x \le b$)}=\int_a^b D(x) dx . \end{equation*}\] What about averages? If \(x\) had a finite number of equally probable values, we would simply add them up and divide by how many we had. If they weren't equally likely, we'd make the weighted average \(x_1P_1+x_2P_2+\)... But we need to generalize this to a variable \(x\) that can take on any of a continuum of values. The continuous version of a sum is an integral, so the average is \[\begin{equation*} \text{(average value of $x$)} = \int x D(x) dx , \end{equation*}\] where the integral is over all possible values of \(x\).

Example 5: Probability distribution for radioactive decay
Here is a rigorous justification for the statement in subsection 13.1.4 that the probability distribution for radioactive decay is found by substituting \(N(0)=1\) into the equation for the rate of decay. We know that the probability distribution must be of the form \[\begin{equation*} D(t) = k 0.5^{t/t_{1/2}} , \end{equation*}\] where \(k\) is a constant that we need to determine. The atom is guaranteed to decay eventually, so normalization gives us \[\begin{align*} \text{(probability of $0\le t \lt \infty$)} &= 1 \\ &= \int_0^\infty D(t) dt . \end{align*}\] The integral is most easily evaluated by converting the function into an exponential with \(e\) as the base \[\begin{align*} D(t) &= k \exp\left[\ln\left(0.5^{t/t_{1/2}}\right)\right] \\ &= k \exp\left[\frac{t}{t_{1/2}}\ln 0.5\right] \\ &= k \exp\left(-\frac{\ln 2}{t_{1/2}}t\right) , \end{align*}\] which gives an integral of the familiar form \(\int e^{cx}dx=(1/c)e^{cx}\). We thus have \[\begin{equation*} 1 = \left.-\frac{kt_{1/2}}{\ln 2}\exp\left(-\frac{\ln 2}{t_{1/2}}t\right)\right]_0^\infty , \end{equation*}\] which gives the desired result: \[\begin{equation*} k = \frac{\ln 2}{t_{1/2}} .
\end{equation*}\]

Example 6: Average lifetime
You might think that the half-life would also be the average lifetime of an atom, since half the atoms' lives are shorter and half longer. But the half whose lives are longer include some that survive for many half-lives, and these rare long-lived atoms skew the average. We can calculate the average lifetime as follows: \[\begin{equation*} (\text{average lifetime}) = \int_0^\infty t\: D(t)dt \end{equation*}\] Using the convenient base-\(e\) form again, we have \[\begin{equation*} (\text{average lifetime}) = \frac{\ln 2}{t_{1/2}} \int_0^\infty t \exp\left(-\frac{\ln 2}{t_{1/2}}t\right) dt . \end{equation*}\] This integral is of a form that can either be attacked with integration by parts or looked up in a table. The result is \(\int x e^{cx}dx=\frac{x}{c}e^{cx}-\frac{1}{c^2}e^{cx}\), and the first term can be ignored for our purposes because it equals zero at both limits of integration. We end up with \[\begin{align*} \text{(average lifetime)} &= \frac{\ln 2}{t_{1/2}}\left(\frac{t_{1/2}}{\ln 2}\right)^2 \\ &= \frac{t_{1/2}}{\ln 2} \\ &= 1.443 \: t_{1/2} , \end{align*}\] which is, as expected, longer than one half-life.

k / In recent decades, a huge hole in the ozone layer has spread out from Antarctica. Left: November 1978. Right: November 1992.

13.2 Light As a Particle

The only thing that interferes with my learning is my education. -- Albert Einstein

Radioactivity is random, but do the laws of physics exhibit randomness in other contexts besides radioactivity? Yes. Radioactive decay was just a good playpen to get us started with concepts of randomness, because all atoms of a given isotope are identical. By stocking the playpen with an unlimited supply of identical atom-toys, nature helped us to realize that their future behavior could be different regardless of their original identicality. We are now ready to leave the playpen, and see how randomness fits into the structure of physics at the most fundamental level.

The laws of physics describe light and matter, and the quantum revolution rewrote both descriptions. Radioactivity was a good example of matter's behaving in a way that was inconsistent with classical physics, but if we want to get under the hood and understand how nonclassical things happen, it will be easier to focus on light rather than matter. A radioactive atom such as uranium-235 is after all an extremely complex system, consisting of 92 protons, 143 neutrons, and 92 electrons. Light, however, can be a simple sine wave.

However successful the classical wave theory of light had been --- allowing the creation of radio and radar, for example --- it still failed to describe many important phenomena. An example that is currently of great interest is the way the ozone layer protects us from the dangerous short-wavelength ultraviolet part of the sun's spectrum. In the classical description, light is a wave. When a wave passes into and back out of a medium, its frequency is unchanged, and although its wavelength is altered while it is in the medium, it returns to its original value when the wave reemerges. Luckily for us, this is not at all what ultraviolet light does when it passes through the ozone layer, or the layer would offer no protection at all!

b / A wave is partially absorbed.

c / A stream of particles is partially absorbed.

d / Einstein and Seurat: twins separated at birth? Seine Grande Jatte by Georges Seurat (19th century).
13.2.1 Evidence for light as a particle

For a long time, physicists tried to explain away the problems with the classical theory of light as arising from an imperfect understanding of atoms and the interaction of light with individual atoms and molecules. The ozone paradox, for example, could have been attributed to the incorrect assumption that one could think of the ozone layer as a smooth, continuous substance, when in reality it was made of individual ozone molecules. It wasn't until 1905 that Albert Einstein threw down the gauntlet, proposing that the problem had nothing to do with the details of light's interaction with atoms and everything to do with the fundamental nature of light itself.

a / Digital camera images of dimmer and dimmer sources of light. The dots are records of individual photons.

In those days the data were sketchy, the ideas vague, and the experiments difficult to interpret; it took a genius like Einstein to cut through the thicket of confusion and find a simple solution. Today, however, we can get right to the heart of the matter with a piece of ordinary consumer electronics, the digital camera. Instead of film, a digital camera has a computer chip with its surface divided up into a grid of light-sensitive squares, called “pixels.” Compared to a grain of the silver compound used to make regular photographic film, a digital camera pixel is activated by an amount of light energy orders of magnitude smaller. We can learn something new about light by using a digital camera to detect smaller and smaller amounts of light, as shown in figure a. Figure a/1 is fake, but a/2 and a/3 are real digital-camera images made by Prof. Lyman Page of Princeton University as a classroom demonstration. Figure a/1 is what we would see if we used the digital camera to take a picture of a fairly dim source of light. In figures a/2 and a/3, the intensity of the light was drastically reduced by inserting semitransparent absorbers like the tinted plastic used in sunglasses. Going from a/1 to a/2 to a/3, more and more light energy is being thrown away by the absorbers.

The results are drastically different from what we would expect based on the wave theory of light. If light was a wave and nothing but a wave, b, then the absorbers would simply cut down the wave's amplitude across the whole wavefront. The digital camera's entire chip would be illuminated uniformly, and weakening the wave with an absorber would just mean that every pixel would take a long time to soak up enough energy to register a signal. But figures a/2 and a/3 show that some pixels take strong hits while others pick up no energy at all. Instead of the wave picture, the image that is naturally evoked by the data is something more like a hail of bullets from a machine gun, c. Each “bullet” of light apparently carries only a tiny amount of energy, which is why detecting them individually requires a sensitive digital camera rather than an eye or a piece of film.

Although Einstein was interpreting different observations, this is the conclusion he reached in his 1905 paper: that the pure wave theory of light is an oversimplification, and that the energy of a beam of light comes in finite chunks rather than being spread smoothly throughout a region of space. We now think of these chunks as particles of light, and call them “photons,” although Einstein avoided the word “particle,” and the word “photon” was invented later. Regardless of words, the trouble was that waves and particles seemed like inconsistent categories.
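The graininess in figure a is easy to mimic numerically if we assume that photons arrive independently of one another, so that each pixel's count obeys the statistics of rare, independent events. The sketch below makes that assumption explicit; the pixel and photon numbers are illustrative choices, not values taken from the actual images.

```python
# Minimal sketch of the dimming-camera demonstration, assuming photons
# arrive as independent events.  Pixel and photon counts are illustrative.
import random

def exposure(mean_photons_per_pixel, n_pixels=20, n_slots=1000):
    """Photon count per pixel: many tiny time slots, each rarely occupied."""
    p = mean_photons_per_pixel / n_slots
    return [sum(random.random() < p for _ in range(n_slots))
            for _ in range(n_pixels)]

print(exposure(50.0))   # bright beam: every pixel registers many hits
print(exposure(0.2))    # heavily absorbed: mostly 0s with scattered 1s,
                        # the discrete dots of figures a/2 and a/3
```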
The reaction to Einstein's paper could be kindly described as vigorously skeptical. Even twenty years later, Einstein wrote, “There are therefore now two theories of light, both indispensable, and --- as one must admit today despite twenty years of tremendous effort on the part of theoretical physicists --- without any logical connection.” In the remainder of this chapter we will learn how the seeming paradox was eventually resolved.

Discussion Questions
Suppose someone rebuts the digital camera data in figure a, claiming that the random pattern of dots occurs not because of anything fundamental about the nature of light but simply because the camera's pixels are not all exactly the same --- some are just more sensitive than others. How could we test this interpretation?
Discuss how the correspondence principle applies to the observations and concepts discussed in this section.

e / Apparatus for observing the photoelectric effect. A beam of light strikes a capacitor plate inside a vacuum tube, and electrons are ejected (black arrows).

f / The hamster in her hamster ball is like an electron emerging from the metal (tiled kitchen floor) into the surrounding vacuum (wood floor). The wood floor is higher than the tiled floor, so as she rolls up the step, the hamster will lose a certain amount of kinetic energy, analogous to \(E_s\). If her kinetic energy is too small, she won't even make it up the step.

g / A different way of studying the photoelectric effect.

h / The quantity \(E_s+e\Delta V\) indicates the energy of one photon. It is found to be proportional to the frequency of the light.

13.2.2 How much light is one photon?

The photoelectric effect

We have seen evidence that light energy comes in little chunks, so the next question to be asked is naturally how much energy is in one chunk. The most straightforward experimental avenue for addressing this question is a phenomenon known as the photoelectric effect. The photoelectric effect occurs when a photon strikes the surface of a solid object and knocks out an electron. It occurs continually all around you. It is happening right now at the surface of your skin and on the paper or computer screen from which you are reading these words. It does not ordinarily lead to any observable electrical effect, however, because on the average free electrons are wandering back in just as frequently as they are being ejected. (If an object did somehow lose a significant number of electrons, its growing net positive charge would begin attracting the electrons back more and more strongly.)

Figure e shows a practical method for detecting the photoelectric effect. Two very clean parallel metal plates (the electrodes of a capacitor) are sealed inside a vacuum tube, and only one plate is exposed to light. Because there is a good vacuum between the plates, any ejected electron that happens to be headed in the right direction will almost certainly reach the other capacitor plate without colliding with any air molecules. The illuminated (bottom) plate is left with a net positive charge, and the unilluminated (top) plate acquires a negative charge from the electrons deposited on it. There is thus an electric field between the plates, and it is because of this field that the electrons' paths are curved, as shown in the diagram. However, since vacuum is a good insulator, any electrons that reach the top plate are prevented from responding to the electrical attraction by jumping back across the gap.
Instead they are forced to make their way around the circuit, passing through an ammeter. The ammeter allows a measurement of the strength of the photoelectric effect.

An unexpected dependence on frequency

The photoelectric effect was discovered serendipitously by Heinrich Hertz in 1887, as he was experimenting with radio waves. He was not particularly interested in the phenomenon, but he did notice that the effect was produced strongly by ultraviolet light and more weakly by lower frequencies. Light whose frequency was lower than a certain critical value did not eject any electrons at all. (In fact this was all prior to Thomson's discovery of the electron, so Hertz would not have described the effect in terms of electrons --- we are discussing everything with the benefit of hindsight.) This dependence on frequency didn't make any sense in terms of the classical wave theory of light. A light wave consists of electric and magnetic fields. The stronger the fields, i.e., the greater the wave's amplitude, the greater the forces that would be exerted on electrons that found themselves bathed in the light. It should have been amplitude (brightness) that was relevant, not frequency. The dependence on frequency not only proves that the wave model of light needs modifying, but with the proper interpretation it allows us to determine how much energy is in one photon, and it also leads to a connection between the wave and particle models that we need in order to reconcile them.

To make any progress, we need to consider the physical process by which a photon would eject an electron from the metal electrode. A metal contains electrons that are free to move around. Ordinarily, in the interior of the metal, such an electron feels attractive forces from atoms in every direction around it. The forces cancel out. But if the electron happens to find itself at the surface of the metal, the attraction from the interior side is not balanced out by any attraction from outside. In popping out through the surface the electron therefore loses some amount of energy \(E_s\), which depends on the type of metal used.

Suppose a photon strikes an electron, annihilating itself and giving up all its energy to the electron. (We now know that this is what always happens in the photoelectric effect, although it had not yet been established in 1905 whether or not the photon was completely annihilated.) The electron will (1) lose kinetic energy through collisions with other electrons as it plows through the metal on its way to the surface; (2) lose an amount of kinetic energy equal to \(E_s\) as it emerges through the surface; and (3) lose more energy on its way across the gap between the plates, due to the electric field between the plates. Even if the electron happens to be right at the surface of the metal when it absorbs the photon, and even if the electric field between the plates has not yet built up very much, \(E_s\) is the bare minimum amount of energy that it must receive from the photon if it is to contribute to a measurable current. The reason for using very clean electrodes is to minimize \(E_s\) and make it have a definite value characteristic of the metal surface, not a mixture of values due to the various types of dirt and crud that are present in tiny amounts on all surfaces in everyday life.

We can now interpret the frequency dependence of the photoelectric effect in a simple way: apparently the amount of energy possessed by a photon is related to its frequency.
A low-frequency red or infrared photon has an energy less than \(E_s\), so a beam of them will not produce any current. A high-frequency blue or violet photon, on the other hand, packs enough of a punch to allow an electron to make it to the other plate. At frequencies higher than the minimum, the photoelectric current continues to increase with the frequency of the light because of effects (1) and (3).

Numerical relationship between energy and frequency

Prompted by Einstein's photon paper, Robert Millikan (whom we first encountered in chapter 8) figured out how to use the photoelectric effect to probe precisely the link between frequency and photon energy. Rather than going into the historical details of Millikan's actual experiments (a lengthy experimental program that occupied a large part of his professional career), we will describe a simple version, shown in figure g, that is used sometimes in college laboratory courses. The idea is simply to illuminate one plate of the vacuum tube with light of a single wavelength and monitor the voltage difference between the two plates as they charge up. Since the resistance of a voltmeter is very high (much higher than the resistance of an ammeter), we can assume to a good approximation that electrons reaching the top plate are stuck there permanently, so the voltage will keep on increasing for as long as electrons are making it across the vacuum tube.

At a moment when the voltage difference has reached a value \(\Delta V\), the minimum energy required by an electron to make it out of the bottom plate and across the gap to the other plate is \(E_s+e\Delta V\). As \(\Delta V\) increases, we eventually reach a point at which \(E_s+e\Delta V\) equals the energy of one photon. No more electrons can cross the gap, and the reading on the voltmeter stops rising. The quantity \(E_s+e\Delta V\) now tells us the energy of one photon. If we determine this energy for a variety of wavelengths, h, we find the following simple relationship between the energy of a photon and the frequency of the light: \[\begin{equation*} E = hf , \end{equation*}\] where \(h\) is a constant with the value \(6.63\times10^{-34}\ \text{J}\cdot\text{s}\). Note how the equation brings the wave and particle models of light under the same roof: the left side is the energy of one particle of light, while the right side is the frequency of the same light, interpreted as a wave. The constant \(h\) is known as Planck's constant, for historical reasons explained in the footnote beginning on the preceding page.

How would you extract \(h\) from the graph in figure h? What if you didn't even know \(E_s\) in advance, and could only graph \(e\Delta V\) versus \(f\)?

Since the energy of a photon is \(hf\), a beam of light can only have energies of \(hf\), \(2hf\), \(3hf\), etc. Its energy is quantized --- there is no such thing as a fraction of a photon. Quantum physics gets its name from the fact that it quantizes quantities like energy, momentum, and angular momentum that had previously been thought to be smooth, continuous and infinitely divisible.

Example 7: Number of photons emitted by a lightbulb per second
\(\triangleright\) Roughly how many photons are emitted by a 100-W lightbulb in 1 second?

\(\triangleright\) People tend to remember wavelengths rather than frequencies for visible light. The bulb emits photons with a range of frequencies and wavelengths, but let's take 600 nm as a typical wavelength for purposes of estimation.
The energy of a single photon is \[\begin{align*} E_{photon} &= hf \\ &= hc/\lambda \end{align*}\] A power of 100 W means 100 joules per second, so the number of photons is \[\begin{align*} (100\ \text{J})/E_{photon} &= (100\ \text{J}) / (hc/\lambda ) \\ &\approx 3\times10^{20} \end{align*}\] The hugeness of this number is consistent with the correspondence principle. The experiments that established the classical theory of optics weren't wrong. They were right, within their domain of applicability, in which the number of photons was so large as to be indistinguishable from a continuous beam.

Example 8: Measuring the wave
When surfers are out on the water waiting for their chance to catch a wave, they're interested in both the height of the waves and when the waves are going to arrive. In other words, they observe both the amplitude and phase of the waves, and it doesn't matter to them that the water is granular at the molecular level. The correspondence principle requires that we be able to do the same thing for electromagnetic waves, since the classical theory of electricity and magnetism was all stated and verified experimentally in terms of the fields \(\mathbf{E}\) and \(\mathbf{B}\), which are the amplitudes of an electromagnetic wave. The phase is also necessary, since the induction effects predicted by Maxwell's equations would flip their signs depending on whether an oscillating field is on its way up or on its way back down. This is a more demanding application of the correspondence principle than the one in example 7, since amplitudes and phases constitute more detailed information than the over-all intensity of a beam of light. Eyeball measurements can't detect this type of information, since the eye is much bigger than a wavelength, but for example an AM radio receiver can do it with radio waves, since the wavelength for a station at 1000 kHz is about 300 meters, which is much larger than the antenna. The correspondence principle demands that we be able to explain this in terms of the photon theory, and this requires not just that we have a large number of photons emitted by the transmitter per second, as in example 7, but that even by the time they spread out and reach the receiving antenna, there should be many photons overlapping each other within a space of one cubic wavelength. Problem 47 on p. 909 verifies that the number is in fact extremely large.

Example 9: Momentum of a photon
\(\triangleright\) According to the theory of relativity, the momentum of a beam of light is given by \(p=E/c\). Apply this to find the momentum of a single photon in terms of its frequency, and in terms of its wavelength.

\(\triangleright\) Combining the equations \(p=E/c\) and \(E=hf\), we find \[\begin{align*} p &= E/c \\ &= \frac{h}{c}f . \end{align*}\] To reexpress this in terms of wavelength, we use \(c=f\lambda \): \[\begin{align*} p &= \frac{h}{c}\cdot\frac{c}{\lambda} \\ &= \frac{h}{\lambda} \end{align*}\] The second form turns out to be simpler.

Discussion Questions
The photoelectric effect only ever ejects a very tiny percentage of the electrons available near the surface of an object. How well does this agree with the wave model of light, and how well with the particle model? Consider the two different distance scales involved: the wavelength of the light, and the size of an atom, which is on the order of \(10^{-10}\) or \(10^{-9}\) m.
What is the significance of the fact that Planck's constant is numerically very small?
How would our everyday experience of light be different if it was not so small?
How would the experiments described above be affected if a single electron was likely to get hit by more than one photon?
Draw some representative trajectories of electrons for \(\Delta V=0\), \(\Delta V\) less than the maximum value, and \(\Delta V\) greater than the maximum value.
Explain based on the photon theory of light why ultraviolet light would be more likely than visible or infrared light to cause cancer by damaging DNA molecules. How does this relate to discussion question C?
Does \(E=hf\) imply that a photon changes its energy when it passes from one transparent material into another substance with a different index of refraction?

j / Bullets pass through a double slit.

k / A water wave passes through a double slit.

l / A single photon can go through both slits.

m / Example 10.

13.2.3 Wave-particle duality

How can light be both a particle and a wave? We are now ready to resolve this seeming contradiction. Often in science when something seems paradoxical, it's because we (1) don't define our terms carefully, or (2) don't test our ideas against any specific real-world situation. Let's define particles and waves as follows: waves are things that exhibit superposition, and specifically interference effects, while particles are things that come only in whole numbers, never fractions.

As a real-world check on our philosophizing, there is one particular experiment that works perfectly. We set up a double-slit interference experiment that we know will produce a diffraction pattern if light is an honest-to-goodness wave, but we detect the light with a detector that is capable of sensing individual photons, e.g., a digital camera. To make it possible to pick out individual dots due to individual photons, we must use filters to cut down the intensity of the light to a very low level, just as in the photos by Prof. Page on p. 843. The whole thing is sealed inside a light-tight box. The results are shown in figure i. (In fact, the similar figures on page 843 are simply cutouts from these figures.) Neither the pure wave theory nor the pure particle theory can explain the results. If light was only a particle and not a wave, there would be no interference effect. The result of the experiment would be like firing a hail of bullets through a double slit, j. Only two spots directly behind the slits would be hit. If, on the other hand, light was only a wave and not a particle, we would get the same kind of diffraction pattern that would happen with a water wave, k. There would be no discrete dots in the photo, only a diffraction pattern that shaded smoothly between light and dark.

A wrong interpretation: photons interfering with each other

The concept of a photon's path is undefined.

Another wrong interpretation: the pilot wave hypothesis

The probability interpretation

\[\begin{equation*} (\text{probability distribution}) \propto (\text{amplitude})^2 . \end{equation*}\]

Example 10: A microwave oven
\(\triangleright\) The figure shows two-dimensional (top) and one-dimensional (bottom) representations of the standing wave inside a microwave oven. Gray represents zero field, and white and black signify the strongest fields, with white being a field that is in the opposite direction compared to black. Compare the probabilities of detecting a microwave photon at points A, B, and C.

\(\triangleright\) A and C are both extremes of the wave, so the probabilities of detecting a photon at A and C are equal. It doesn't matter that we have represented C as negative and A as positive, because it is the square of the amplitude that is relevant.
The amplitude at B is about 1/2 as much as the others, so the probability of detecting a photon there is about 1/4 as much.

Example 11: What is the proportionality constant?
\(\triangleright\) What is the proportionality constant that would make an actual equation out of \((\text{probability distribution})\propto(\text{amplitude})^2\)?

\(\triangleright\) The probability that the photon is in a certain small region of volume \(v\) should equal the fraction of the wave's energy that is within that volume. For a sinusoidal wave, which has a single, well-defined frequency \(f\), this gives \[\begin{align*} P &= \frac{\text{energy in volume $v$}}{\text{energy of photon}} \\ &= \frac{\text{energy in volume $v$}}{hf} . \end{align*}\] We assume \(v\) is small enough so that the electric and magnetic fields are nearly constant throughout it. We then have \[\begin{equation*} P = \frac{\left(\frac{1}{8\pi k}|\mathbf{E}|^2 +\frac{c^2}{8\pi k}|\mathbf{B}|^2\right)v}{hf} . \end{equation*}\] We can simplify this formidable looking expression by recognizing that in a plane wave, \(|\mathbf{E}|\) and \(|\mathbf{B}|\) are related by \(|\mathbf{E}|=c|\mathbf{B}|\). This implies (problem 40, p. 729) that the electric and magnetic fields each contribute half the total energy, so we can simplify the result to \[\begin{align*} P &= 2\frac{\left(\frac{1}{8\pi k}|\mathbf{E}|^2\right)v}{hf} \\ &= \frac{v}{4\pi khf}|\mathbf{E}|^2 . \end{align*}\] The probability is proportional to the square of the wave's amplitude, as advertised.

Discussion Questions
Can a white photon exist?

n / Probability is the volume under a surface defined by \(D(x,y)\).

13.2.4 Photons in three dimensions

Up until now I've been sneaky and avoided a full discussion of the three-dimensional aspects of the probability interpretation. The example of the carrot in the microwave oven reduced to a one-dimensional situation because we were considering three points along the same line and because we were only comparing ratios of probabilities. The purpose of bringing it up now is to head off any feeling that you've been cheated conceptually rather than to prepare you for mathematical problem solving in three dimensions, which would not be appropriate for the level of this course.

A typical example of a probability distribution in section 13.1 was the distribution of heights of human beings. The thing that varied randomly, height, \(h\), had units of meters, and the probability distribution was a graph of a function \(D(h)\). The units of the probability distribution had to be \(\text{m}^{-1}\) (inverse meters) so that areas under the curve, interpreted as probabilities, would be unitless: \((\text{area})=(\text{height})(\text{width})=\text{m}^{-1}\cdot\text{m}\).

Now suppose we have a two-dimensional problem, e.g., the probability distribution for the place on the surface of a digital camera chip where a photon will be detected. The point where it is detected would be described with two variables, \(x\) and \(y\), each having units of meters. The probability distribution will be a function of both variables, \(D(x,y)\). A probability is now visualized as the volume under the surface described by the function \(D(x,y)\), as shown in figure n. The units of \(D\) must be \(\text{m}^{-2}\) so that probabilities will be unitless: \((\text{probability})=(\text{depth})(\text{length})(\text{width}) =\text{m}^{-2}\cdot\text{m}\cdot\text{m}\). In terms of calculus, we have \(P = \int D\,dx\,dy\).
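The statement that probability is the volume under \(D(x,y)\) can be checked numerically. The sketch below integrates an illustrative two-dimensional distribution (a Gaussian chosen arbitrarily for the example, not taken from the text) and confirms that the total volume is 1 and that half the volume lies on either side of center:

```python
# Minimal numerical check that probability = volume under D(x, y).
# The Gaussian distribution and grid resolution are arbitrary choices.
import math

def D(x, y, s=1.0):
    """An illustrative normalized 2D distribution with units of m^-2."""
    return math.exp(-(x * x + y * y) / (2 * s * s)) / (2 * math.pi * s * s)

def volume(xlo, xhi, ylo, yhi, n=400):
    """Midpoint-rule estimate of the volume under D over a rectangle."""
    dx, dy = (xhi - xlo) / n, (yhi - ylo) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += D(xlo + (i + 0.5) * dx, ylo + (j + 0.5) * dy)
    return total * dx * dy

print(volume(-6, 6, -6, 6))   # ~1.0: normalization, as in section 13.1
print(volume(0, 6, -6, 6))    # ~0.5: the photon lands at x > 0 half the time
```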
Generalizing finally to three dimensions, we find by analogy that the probability distribution will be a function of all three coordinates, \(D(x,y,z)\), and will have units of \(\text{m}^{-3}\). It is unfortunately impossible to visualize the graph unless you are a mutant with a natural feel for life in four dimensions. If the probability distribution is nearly constant within a certain volume of space \(v\), the probability that the photon is in that volume is simply \(vD\). If not, then we can use an integral, \(P = \int D\,dx\,dy\,dz\).

13.3 Matter As a Wave

[In] a few minutes I shall be all melted... I have been wicked in my day, but I never thought a little girl like you would ever be able to melt me and end my wicked deeds. Look out --- here I go! -- The Wicked Witch of the West

As the Wicked Witch learned the hard way, losing molecular cohesion can be unpleasant. That's why we should be very grateful that the concepts of quantum physics apply to matter as well as light. If matter obeyed the laws of classical physics, molecules wouldn't exist.

Consider, for example, the simplest atom, hydrogen. Why does one hydrogen atom form a chemical bond with another hydrogen atom? Roughly speaking, we'd expect a neighboring pair of hydrogen atoms, A and B, to exert no force on each other at all, attractive or repulsive: there are two repulsive interactions (proton A with proton B and electron A with electron B) and two attractive interactions (proton A with electron B and electron A with proton B). Thinking a little more precisely, we should even expect that once the two atoms got close enough, the interaction would be repulsive. For instance, if you squeezed them so close together that the two protons were almost on top of each other, there would be a tremendously strong repulsion between them due to the \(1/r^2\) nature of the electrical force. The repulsion between the electrons would not be as strong, because each electron ranges over a large area, and is not likely to be found right on top of the other electron. Thus hydrogen molecules should not exist according to classical physics.

Quantum physics to the rescue! As we'll see shortly, the whole problem is solved by applying the same quantum concepts to electrons that we have already used for photons.

b / These two electron waves are not distinguishable by any measuring device.

13.3.1 Electrons as waves

We started our journey into quantum physics by studying the random behavior of matter in radioactive decay, and then asked how randomness could be linked to the basic laws of nature governing light. The probability interpretation of wave-particle duality was strange and hard to accept, but it provided such a link. It is now natural to ask whether the same explanation could be applied to matter. If the fundamental building block of light, the photon, is a particle as well as a wave, is it possible that the basic units of matter, such as electrons, are waves as well as particles?

A young French aristocrat studying physics, Louis de Broglie (pronounced “broylee”), made exactly this suggestion in his 1923 Ph.D. thesis. His idea had seemed so farfetched that there was serious doubt about whether to grant him the degree. Einstein was asked for his opinion, and with his strong support, de Broglie got his degree. Only two years later, American physicists C.J. Davisson and L. Germer confirmed de Broglie's idea by accident. They had been studying the scattering of electrons from the surface of a sample of nickel, made of many small crystals.
(One can often see such a crystalline pattern on a brass doorknob that has been polished by repeated handling.) An accidental explosion occurred, and when they put their apparatus back together they observed something entirely different: the scattered electrons were now creating an interference pattern! This dramatic proof of the wave nature of matter came about because the nickel sample had been melted by the explosion and then resolidified as a single crystal. The nickel atoms, now nicely arranged in the regular rows and columns of a crystalline lattice, were acting as the lines of a diffraction grating. The new crystal was analogous to the type of ordinary diffraction grating in which the lines are etched on the surface of a mirror (a reflection grating) rather than the kind in which the light passes through the transparent gaps between the lines (a transmission grating).

a / A double-slit interference pattern made with neutrons. (A. Zeilinger, R. Gähler, C.G. Shull, W. Treimer, and W. Mampe, Reviews of Modern Physics, Vol. 60, 1988.)

Although we will concentrate on the wave-particle duality of electrons because it is important in chemistry and the physics of atoms, all the other “particles” of matter you've learned about show wave properties as well. Figure a, for instance, shows a wave interference pattern of neutrons. It might seem as though all our work was already done for us, and there would be nothing new to understand about electrons: they have the same kind of funny wave-particle duality as photons. That's almost true, but not quite. There are some important ways in which electrons differ significantly from photons:

1. Electrons have mass, and photons don't.

2. Photons always move at the speed of light, but electrons can move at any speed less than \(c\).

3. Photons don't have electric charge, but electrons do, so electric forces can act on them. The most important example is the atom, in which the electrons are held by the electric force of the nucleus.

4. Electrons cannot be absorbed or emitted as photons are. Destroying an electron or creating one out of nothing would violate conservation of charge.

(In section 13.4 we will learn of one more fundamental way in which electrons differ from photons, for a total of five.)

Because electrons are different from photons, it is not immediately obvious which of the photon equations from chapter 11 can be applied to electrons as well. A particle property, the energy of one photon, is related to its wave properties via \(E=hf\) or, equivalently, \(E=hc/\lambda \). The momentum of a photon was given by \(p=hf/c\) or \(p=h/\lambda \). Ultimately it was a matter of experiment to determine which of these equations, if any, would work for electrons, but we can make a quick and dirty guess simply by noting that some of the equations involve \(c\), the speed of light, and some do not. Since \(c\) is irrelevant in the case of an electron, we might guess that the equations of general validity are those that do not have \(c\) in them: \[\begin{align*} E &= hf \\ p &= h/\lambda \end{align*}\] This is essentially the reasoning that de Broglie went through, and experiments have confirmed these two equations for all the fundamental building blocks of light and matter, not just for photons and electrons. The second equation, which I soft-pedaled in the previous chapter, takes on a greater importance for electrons.
This is first of all because the momentum of matter is more likely to be significant than the momentum of light under ordinary conditions, and also because force is the transfer of momentum, and electrons are affected by electrical forces.

Example 12: The wavelength of an elephant
\(\triangleright\) What is the wavelength of a trotting elephant?

\(\triangleright\) One may doubt whether the equation should be applied to an elephant, which is not just a single particle but a rather large collection of them. Throwing caution to the wind, however, we estimate the elephant's mass at \(10^3\) kg and its trotting speed at 10 m/s. Its wavelength is therefore roughly \[\begin{align*} \lambda &= \frac{h}{p} \\ &= \frac{h}{mv} \\ &= \frac{6.63\times10^{-34}\ \text{J}\!\cdot\!\text{s}}{(10^3\ \text{kg})(10\ \text{m}/\text{s})} \\ &\sim 10^{-37}\ \frac{\left(\text{kg}\!\cdot\!\text{m}^2/\text{s}^2\right)\!\cdot\!\text{s}}{\text{kg}\!\cdot\!\text{m}/\text{s}} \\ &= 10^{-37}\ \text{m} \end{align*}\]

The wavelength found in this example is so fantastically small that we can be sure we will never observe any measurable wave phenomena with elephants or any other human-scale objects. The result is numerically small because Planck's constant is so small, and as in some examples encountered previously, this smallness is in accord with the correspondence principle. Although a smaller mass in the equation \(\lambda =h/mv\) does result in a longer wavelength, the wavelength is still quite short even for individual electrons under typical conditions, as shown in the following example.

Example 13: The typical wavelength of an electron
\(\triangleright\) Electrons in circuits and in atoms are typically moving through voltage differences on the order of 1 V, so that a typical energy is \((e)(1\ \text{V})\), which is on the order of \(10^{-19}\ \text{J}\). What is the wavelength of an electron with this amount of kinetic energy?

\(\triangleright\) This energy is nonrelativistic, since it is much less than \(mc^2\). Momentum and energy are therefore related by the nonrelativistic equation \(K=p^2/2m\). Solving for \(p\) and substituting into the equation for the wavelength, we find \[\begin{align*} \lambda &= \frac{h}{\sqrt{2mK}} \\ &= 1.6\times10^{-9}\ \text{m} . \end{align*}\]

This is on the same order of magnitude as the size of an atom, which is no accident: as we will discuss in the next chapter in more detail, an electron in an atom can be interpreted as a standing wave. The smallness of the wavelength of a typical electron also helps to explain why the wave nature of electrons wasn't discovered until a hundred years after the wave nature of light. To scale the usual wave-optics devices such as diffraction gratings down to the size needed to work with electrons at ordinary energies, we need to make them so small that their parts are comparable in size to individual atoms. This is essentially what Davisson and Germer did with their nickel crystal. These remarks about the inconvenient smallness of electron wavelengths apply only under the assumption that the electrons have typical energies. What kind of energy would an electron have to have in order to have a longer wavelength that might be more convenient to work with?

What kind of wave is it?

If a sound wave is a vibration of matter, and a photon is a vibration of electric and magnetic fields, what kind of a wave is an electron made of?
The disconcerting answer is that there is no experimental “observable,” i.e., directly measurable quantity, to correspond to the electron wave itself. In other words, there are devices like microphones that detect the oscillations of air pressure in a sound wave, and devices such as radio receivers that measure the oscillation of the electric and magnetic fields in a light wave, but nobody has ever found any way to measure the electron wave directly. We can of course detect the energy (or momentum) possessed by an electron just as we could detect the energy of a photon using a digital camera. (In fact I'd imagine that an unmodified digital camera chip placed in a vacuum chamber would detect electrons just as handily as photons.) But this only allows us to determine where the wave carries high probability and where it carries low probability. Probability is proportional to the square of the wave's amplitude, but measuring its square is not the same as measuring the wave itself. In particular, we get the same result by squaring either a positive number or its negative, so there is no way to determine the positive or negative sign of an electron wave.

Most physicists tend toward the school of philosophy known as operationalism, which says that a concept is only meaningful if we can define some set of operations for observing, measuring, or testing it. According to a strict operationalist, then, the electron wave itself is a meaningless concept. Nevertheless, it turns out to be one of those concepts like love or humor that is impossible to measure and yet very useful to have around. We therefore give it a symbol, \(\Psi \) (the capital Greek letter psi), and a special name, the electron wavefunction (because it is a function of the coordinates \(x\), \(y\), and \(z\) that specify where you are in space). It would be impossible, for example, to calculate the shape of the electron wave in a hydrogen atom without having some symbol for the wave. But when the calculation produces a result that can be compared directly to experiment, the final algebraic result will turn out to involve only \(\Psi^2\), which is what is observable, not \(\Psi \) itself.

Since \(\Psi \), unlike \(E\) and \(B\), is not directly measurable, we are free to make the probability equations have a simple form: instead of having the probability density equal to some funny constant multiplied by \(\Psi^2\), we simply define \(\Psi \) so that the constant of proportionality is one: \[\begin{equation*} (\text{probability distribution}) = \Psi ^2 . \end{equation*}\] Since the probability distribution has units of \(\text{m}^{-3}\), the units of \(\Psi \) must be \(\text{m}^{-3/2}\).

Discussion Question
Frequency is oscillations per second, whereas wavelength is meters per oscillation. How could the equations \(E=hf\) and \(p=h/\lambda\) be made to look more alike by using quantities that were more closely analogous? (This more symmetric treatment makes it easier to incorporate relativity into quantum mechanics, since relativity says that space and time are not entirely separate.)

c / Part of an infinite sine wave.

d / A finite-length sine wave.

e / A beat pattern created by superimposing two sine waves with slightly different wavelengths.
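As a quick arithmetic check on examples 12 and 13, here is a minimal sketch of mine (not part of the text), with the constants rounded the same way the examples round them:

```python
import math

h = 6.63e-34     # Planck's constant, J*s
m_e = 9.11e-31   # electron mass, kg

# Example 12: a trotting elephant, lambda = h/(m v)
lam_elephant = h / (1.0e3 * 10.0)
print(f"elephant: {lam_elephant:.1e} m")   # roughly 10^-37 m

# Example 13: electron with K ~ 10^-19 J (the example rounds (e)(1 V) to this)
K = 1.0e-19
lam_electron = h / math.sqrt(2 * m_e * K)
print(f"electron: {lam_electron:.2e} m")   # ~ 1.6e-9 m, atomic-scale
```

The electron's wavelength comes out atomic-sized, which is the point of example 13; the elephant's is absurdly short, in accord with the correspondence principle.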
13.3.2 Dispersive waves

A colleague of mine who teaches chemistry loves to tell the story about an exceptionally bright student who, when told of the equation \(p=h/\lambda \), protested, “But when I derived it, it had a factor of 2!” The issue that's involved is a real one, albeit one that could be glossed over (and is, in most textbooks) without raising any alarms in the mind of the average student. The present optional section addresses this point; it is intended for the student who wishes to delve a little deeper.

Here's how the now-legendary student was presumably reasoning. We start with the equation \(v=f\lambda \), which is valid for any sine wave, whether it's quantum or classical. Let's assume we already know \(E=hf\), and are trying to derive the relationship between wavelength and momentum: \[\begin{align*} \lambda &= \frac{v}{f} \\ &= \frac{vh}{E} \\ &= \frac{vh}{\frac{1}{2}mv^2} \\ &= \frac{2h}{mv} \\ &= \frac{2h}{p} . \end{align*}\]

The reasoning seems valid, but the result does contradict the accepted one, which is after all solidly based on experiment. The mistaken assumption is that we can figure everything out in terms of pure sine waves. Mathematically, the only wave that has a perfectly well defined wavelength and frequency is a sine wave, and not just any sine wave but an infinitely long sine wave, c. The unphysical thing about such a wave is that it has no leading or trailing edge, so it can never be said to enter or leave any particular region of space. Our derivation made use of the velocity, \(v\), and if velocity is to be a meaningful concept, it must tell us how quickly stuff (mass, energy, momentum, ...) is transported from one region of space to another. Since an infinitely long sine wave doesn't remove any stuff from one region and take it to another, the “velocity of its stuff” is not a well defined concept.

Of course the individual wave peaks do travel through space, and one might think that it would make sense to associate their speed with the “speed of stuff,” but as we will see, the two velocities are in general unequal when a wave's velocity depends on wavelength. Such a wave is called a dispersive wave, because a wave pulse consisting of a superposition of waves of different wavelengths will separate (disperse) into its separate wavelengths as the waves move through space at different speeds. Nearly all the waves we have encountered have been nondispersive. For instance, sound waves and light waves (in a vacuum) have speeds independent of wavelength. A water wave is one good example of a dispersive wave. Long-wavelength water waves travel faster, so a ship at sea that encounters a storm typically sees the long-wavelength parts of the wave first. When dealing with dispersive waves, we need symbols and words to distinguish the two speeds. The speed at which wave peaks move is called the phase velocity, \(v_p\), and the speed at which “stuff” moves is called the group velocity, \(v_g\).

An infinite sine wave can only tell us about the phase velocity, not the group velocity, which is really what we would be talking about when we refer to the speed of an electron. If an infinite sine wave is the simplest possible wave, what's the next best thing? We might think the runner-up in simplicity would be a wave train consisting of a chopped-off segment of a sine wave, d. However, this kind of wave has kinks in it at the ends.
A simple wave should be one that we can build by superposing a small number of infinite sine waves, but a kink can never be produced by superposing any number of infinitely long sine waves. Actually the simplest wave that transports stuff from place to place is the pattern shown in figure e. Called a beat pattern, it is formed by superposing two sine waves whose wavelengths are similar but not quite the same. If you have ever heard the pulsating howling sound of musicians in the process of tuning their instruments to each other, you have heard a beat pattern. The beat pattern gets stronger and weaker as the two sine waves go in and out of phase with each other. The beat pattern has more “stuff” (energy, for example) in the areas where constructive interference occurs, and less in the regions of cancellation. As the whole pattern moves through space, stuff is transported from some regions and into other ones.

If the frequencies of the two sine waves differ by 10%, for instance, then ten periods will occur between times when they are in phase. Another way of saying it is that the sinusoidal “envelope” (the dashed lines in figure e) has a frequency equal to the difference in frequency between the two waves. For instance, if the waves had frequencies of 100 Hz and 110 Hz, the frequency of the envelope would be 10 Hz. To apply similar reasoning to the wavelength, we must define a quantity \(z=1/\lambda \) that relates to wavelength in the same way that frequency relates to period. In terms of this new variable, the \(z\) of the envelope equals the difference between the \(z\)'s of the two sine waves.

The group velocity is the speed at which the envelope moves through space. Let \(\Delta f\) and \(\Delta z\) be the differences between the frequencies and \(z\)'s of the two sine waves, which means that they equal the frequency and \(z\) of the envelope. The group velocity is \(v_g=f_{\text{envelope}}\lambda_{\text{envelope}}=\Delta f/\Delta z\). If \(\Delta f\) and \(\Delta z\) are sufficiently small, we can approximate this expression as a derivative, \[\begin{equation*} v_g = \frac{df}{dz} . \end{equation*}\] This expression is usually taken as the definition of the group velocity for wave patterns that consist of a superposition of sine waves having a narrow range of frequencies and wavelengths. In quantum mechanics, with \(f=E/h\) and \(z=p/h\), we have \(v_g=dE/dp\). In the case of a nonrelativistic electron the relationship between energy and momentum is \(E=p^2/2m\), so the group velocity is \(dE/dp=p/m=v\), exactly what it should be. It is only the phase velocity that differs by a factor of two from what we would have expected, but the phase velocity is not the physically important thing.

f / Three possible standing-wave patterns for a particle in a box.

g / The spectrum of the light from the star Sirius.

h / Two hydrogen atoms bond to form an \(\text{H}_2\) molecule. In the molecule, the two electrons' wave patterns overlap, and are about twice as wide.

13.3.3 Bound states

Electrons are at their most interesting when they're in atoms, that is, when they are bound within a small region of space. We can understand a great deal about atoms and molecules based on simple arguments about such bound states, without going into any of the realistic details of the atom.
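Before we get into the details of bound states, the group-velocity claim from the preceding subsection is easy to check numerically. This is a minimal sketch of mine (the momentum value is an arbitrary choice), not something from the text:

```python
m = 9.11e-31    # electron mass, kg
p0 = 5.0e-25    # an arbitrarily chosen momentum, kg*m/s

E = lambda p: p**2 / (2 * m)   # nonrelativistic dispersion, E = p^2/2m

# group velocity v_g = dE/dp, via a centered numerical derivative
dp = 1.0e-30
v_group = (E(p0 + dp) - E(p0 - dp)) / (2 * dp)

# phase velocity v_p = f*lambda = (E/h)*(h/p) = E/p
v_phase = E(p0) / p0

print("classical v = p/m:", p0 / m)   # all in m/s
print("group velocity   :", v_group)  # equals p/m
print("phase velocity   :", v_phase)  # half of p/m
```

The envelope moves at \(p/m\), the classical velocity, while the individual peaks move at half that speed, which is exactly the factor of two the legendary student stumbled over.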
The simplest model of a bound state is known as the particle in a box: like a ball on a pool table, the electron feels zero force while in the interior, but when it reaches an edge it encounters a wall that pushes back inward on it with a large force. In particle language, we would describe the electron as bouncing off of the wall, but this incorrectly assumes that the electron has a certain path through space. It is more correct to describe the electron as a wave that undergoes 100% reflection at the boundaries of the box.

Like a generation of physics students before me, I rolled my eyes when initially introduced to the unrealistic idea of putting a particle in a box. It seemed completely impractical, an artificial textbook invention. Today, however, it has become routine to study electrons in rectangular boxes in actual laboratory experiments. The “box” is actually just an empty cavity within a solid piece of silicon, amounting in volume to a few hundred atoms. The methods for creating these electron-in-a-box setups (known as “quantum dots”) were a by-product of the development of technologies for fabricating computer chips.

For simplicity let's imagine a one-dimensional electron in a box, i.e., we assume that the electron is only free to move along a line. The resulting standing wave patterns, of which the first three are shown in the figure, are just like some of the patterns we encountered with sound waves in musical instruments. The wave patterns must be zero at the ends of the box, because we are assuming the walls are impenetrable, and there should therefore be zero probability of finding the electron outside the box. Each wave pattern is labeled according to \(n\), the number of peaks and valleys it has. In quantum physics, these wave patterns are referred to as “states” of the particle-in-the-box system.

The following seemingly innocuous observations about the particle in the box lead us directly to the solutions to some of the most vexing failures of classical physics:

The particle's energy is quantized (can only have certain values). Each wavelength corresponds to a certain momentum, and a given momentum implies a definite kinetic energy, \(E=p^2/2m\). (This is the second type of energy quantization we have encountered. The type we studied previously had to do with restricting the number of particles to a whole number, while assuming some specific wavelength and energy for each particle. This type of quantization refers to the energies that a single particle can have. Both photons and matter particles demonstrate both types of quantization under the appropriate circumstances.)

The particle has a minimum kinetic energy. Long wavelengths correspond to low momenta and low energies. There can be no state with an energy lower than that of the \(n=1\) state, called the ground state.

The smaller the space in which the particle is confined, the higher its kinetic energy must be. Again, this is because long wavelengths give lower energies.

Example 14: Spectra of thin gases
A fact that was inexplicable by classical physics was that thin gases absorb and emit light only at certain wavelengths. This was observed both in earthbound laboratories and in the spectra of stars. The figure on the left shows the example of the spectrum of the star Sirius, in which there are “gap teeth” at certain wavelengths. Taking this spectrum as an example, we can give a straightforward explanation using quantum physics.
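The observations above are easy to turn into numbers. In a box of length \(L\), the \(n\)th pattern has \(\lambda_n=2L/n\), so \(p=h/\lambda=nh/2L\) and \(E_n=n^2h^2/8mL^2\). Here is a minimal sketch of mine (the 1 nm box length is an assumed, roughly quantum-dot-sized value), not something from the text:

```python
h = 6.63e-34     # Planck's constant, J*s
m_e = 9.11e-31   # electron mass, kg
eV = 1.60e-19    # joules per electron-volt
L = 1.0e-9       # box length: an assumed 1 nm "quantum dot"

# E_n = n^2 h^2 / (8 m L^2), from lambda_n = 2L/n and E = p^2/2m
for n in (1, 2, 3):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(f"n={n}: {E_n:.2e} J = {E_n/eV:.2f} eV")
```

The \(n=1\) energy is nonzero, and halving \(L\) would quadruple every level, just as the observations claim; the gaps between levels are the kind of discrete photon energies that example 14 appeals to.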
Example 15: The stability of atoms
In many Star Trek episodes the Enterprise, in orbit around a planet, suddenly lost engine power and began spiraling down toward the planet's surface. This was utter nonsense, of course, due to conservation of energy: the ship had no way of getting rid of energy, so it did not need the engines to replenish it. Consider, however, the electron in an atom as it orbits the nucleus. The electron does have a way to release energy: it has an acceleration due to its continuously changing direction of motion, and according to classical physics, any accelerating charged particle emits electromagnetic waves. According to classical physics, atoms should collapse! The solution lies in the observation that a bound state has a minimum energy. An electron in one of the higher-energy atomic states can and does emit photons and hop down step by step in energy. But once it is in the ground state, it cannot emit a photon because there is no lower-energy state for it to go to.

Example 16: Chemical bonds
I began this section with a classical argument that chemical bonds, as in an \(\text{H}_2\) molecule, should not exist. Quantum physics explains why this type of bonding does in fact occur. When the atoms are next to each other, the electrons are shared between them. The “box” is about twice as wide, and a larger box allows a smaller energy. Energy is required in order to separate the atoms. (A qualitatively different type of bonding is discussed on page 897. Example 23 on page 893 revisits the \(\text{H}_2\) bond in more detail.)

Discussion Questions

Neutrons attract each other via the strong nuclear force, so according to classical physics it should be possible to form nuclei out of clusters of two or more neutrons, with no protons at all. Experimental searches, however, have failed to turn up evidence of a stable two-neutron system (dineutron) or larger stable clusters. These systems are apparently not just unstable in the sense of being able to beta decay but unstable in the sense that they don't hold together at all. Explain based on quantum physics why a dineutron might spontaneously fly apart.

The following table shows the energy gap between the ground state and the first excited state for four nuclei, in units of picojoules. (The nuclei were chosen to be ones that have similar structures, e.g., they are all spherical in shape.)

nucleus               energy gap (picojoules)
\(^{4}\text{He}\)     3.234
\(^{16}\text{O}\)     0.968
\(^{40}\text{Ca}\)    0.536
\(^{208}\text{Pb}\)   0.418

Explain the trend in the data.

i / Werner Heisenberg (1901-1976). Heisenberg helped to develop the foundations of quantum mechanics, including the Heisenberg uncertainty principle. He was the scientific leader of the Nazi atomic-bomb program up until its cancellation in 1942, when the military decided that it was too ambitious a project to undertake in wartime, and too unlikely to produce results.

13.3.4 The uncertainty principle and measurement

Eliminating randomness through measurement?

A common reaction to quantum physics, among both early-twentieth-century physicists and modern students, is that we should be able to get rid of randomness through accurate measurement. If I say, for example, that it is meaningless to discuss the path of a photon or an electron, one might suggest that we simply measure the particle's position and velocity many times in a row. This series of snapshots would amount to a description of its path.
A practical objection to this plan is that the process of measurement will have an effect on the thing we are trying to measure. This may not be of much concern, for example, when a traffic cop measures your car's motion with a radar gun, because the energy and momentum of the radar pulses are insufficient to change the car's motion significantly. But on the subatomic scale it is a very real problem. Making a videotape through a microscope of an electron orbiting a nucleus is not just difficult, it is theoretically impossible. The video camera makes pictures of things using light that has bounced off them and come into the camera. If even a single photon of visible light was to bounce off of the electron we were trying to study, the electron's recoil would be enough to change its behavior significantly.

The Heisenberg uncertainty principle

This insight, that measurement changes the thing being measured, is the kind of idea that clove-cigarette-smoking intellectuals outside of the physical sciences like to claim they knew all along. If only, they say, the physicists had made more of a habit of reading literary journals, they could have saved a lot of work. The anthropologist Margaret Mead has recently been accused of inadvertently encouraging her teenaged Samoan informants to exaggerate the freedom of youthful sexual experimentation in their society. If this is considered a damning critique of her work, it is because she could have done better: other anthropologists claim to have been able to eliminate the observer-as-participant problem and collect untainted data.

The German physicist Werner Heisenberg, however, showed that in quantum physics, any measuring technique runs into a brick wall when we try to improve its accuracy beyond a certain point. Heisenberg showed that the limitation is a question of what there is to be known, even in principle, about the system itself, not of the ability or inability of a specific measuring device to ferret out information that is knowable but previously hidden.

Suppose, for example, that we have constructed an electron in a box (quantum dot) setup in our laboratory, and we are able to adjust the length \(L\) of the box as desired. All the standing wave patterns pretty much fill the box, so our knowledge of the electron's position is of limited accuracy. If we write \(\Delta x\) for the range of uncertainty in our knowledge of its position, then \(\Delta x\) is roughly the same as the length of the box: \[\begin{equation*} \Delta x \approx L \end{equation*}\]

If we wish to know its position more accurately, we can certainly squeeze it into a smaller space by reducing \(L\), but this has an unintended side-effect. A standing wave is really a superposition of two traveling waves going in opposite directions. The equation \(p=h/\lambda \) really only gives the magnitude of the momentum vector, not its direction, so we should really interpret the wave as a 50/50 mixture of a right-going wave with momentum \(p=h/\lambda \) and a left-going one with momentum \(p=-h/\lambda \). The uncertainty in our knowledge of the electron's momentum is \(\Delta p=2h/\lambda\), covering the range between these two values. Even if we make sure the electron is in the ground state, whose wavelength \(\lambda =2L\) is the longest possible, we have an uncertainty in momentum of \(\Delta p=h/L\). In general, we find \[\begin{equation*} \Delta p \gtrsim h/L , \end{equation*}\] with equality for the ground state and inequality for the higher-energy states.
Thus if we reduce \(L\) to improve our knowledge of the electron's position, we do so at the cost of knowing less about its momentum. This trade-off is neatly summarized by multiplying the two equations to give \[\begin{equation*} \Delta p\Delta x \gtrsim h . \end{equation*}\] Although we have derived this in the special case of a particle in a box, it is an example of a principle of more general validity:

The Heisenberg uncertainty principle
It is not possible, even in principle, to know the momentum and the position of a particle simultaneously and with perfect accuracy. The uncertainties in these two quantities are always such that \(\Delta p\Delta x \gtrsim h\).

(This approximation can be made into a strict inequality, \(\Delta p\Delta x>h/4\pi\), but only with more careful definitions, which we will not bother with.)

Note that although I encouraged you to think of this derivation in terms of a specific real-world system, the quantum dot, no reference was ever made to any specific laboratory equipment or procedures. The argument is simply that we cannot know the particle's position very accurately unless it has a very well defined position, it cannot have a very well defined position unless its wave-pattern covers only a very small amount of space, and its wave-pattern cannot be thus compressed without giving it a short wavelength and a correspondingly uncertain momentum. The uncertainty principle is therefore a restriction on how much there is to know about a particle, not just on what we can know about it with a certain technique.

Example 17: An estimate for electrons in atoms
\(\triangleright\) A typical energy for an electron in an atom is on the order of \((\text{1 volt})\cdot e\), which corresponds to a speed of about 1% of the speed of light. If a typical atom has a size on the order of 0.1 nm, how close are the electrons to the limit imposed by the uncertainty principle?

\(\triangleright\) If we assume the electron moves in all directions with equal probability, the uncertainty in its momentum is roughly twice its typical momentum. This is only an order-of-magnitude estimate, so we take \(\Delta p\) to be the same as a typical momentum: \[\begin{align*} \Delta p \Delta x &= p_{\text{typical}} \Delta x \\ &= (m_{\text{electron}}) (0.01c) (0.1\times10^{-9}\ \text{m}) \\ &= 3\times 10^{-34}\ \text{J}\!\cdot\!\text{s} \end{align*}\] This is on the same order of magnitude as Planck's constant, so evidently the electron is “right up against the wall.” (The fact that it is somewhat less than \(h\) is of no concern since this was only an estimate, and we have not stated the uncertainty principle in its most exact form.)

Measurement and Schrödinger's cat

On p. 853 I briefly mentioned an issue concerning measurement that we are now ready to address carefully. If you hang around a laboratory where quantum-physics experiments are being done and secretly record the physicists' conversations, you'll hear them say many things that assume the probability interpretation of quantum mechanics. Usually they will speak as though the randomness of quantum mechanics enters the picture when something is measured.
In the digital camera experiments of section 13.2, for example, they would casually describe the detection of a photon at one of the pixels as if the moment of detection was when the photon was forced to “make up its mind.” Although this mental cartoon usually works fairly well as a description of things they experience in the lab, it cannot ultimately be correct, because it attributes a special role to measurement, which is really just a physical process like all other physical processes.

If we are to find an interpretation that avoids giving any special role to measurement processes, then we must think of the entire laboratory, including the measuring devices and the physicists themselves, as one big quantum-mechanical system made out of protons, neutrons, electrons, and photons. In other words, we should take quantum physics seriously as a description not just of microscopic objects like atoms but of human-scale (“macroscopic”) things like the apparatus, the furniture, and the people.

The most celebrated example is called the Schrödinger's cat experiment. Luckily for the cat, there probably was no actual experiment --- it was simply a “thought experiment” that the Austrian theorist Schrödinger discussed with his colleagues. Schrödinger wrote:

One can even construct quite burlesque cases. A cat is shut up in a steel container, together with the following diabolical apparatus (which one must keep out of the direct clutches of the cat): In a Geiger tube [radiation detector] there is a tiny mass of radioactive substance, so little that in the course of an hour perhaps one atom of it disintegrates, but also with equal probability not even one; if it does happen, the counter [detector] responds and ... activates a hammer that shatters a little flask of prussic acid [filling the chamber with poison gas]. If one has left this entire system to itself for an hour, then one will say to himself that the cat is still living, if in that time no atom has disintegrated. The first atomic disintegration would have poisoned it.

Now comes the strange part. Quantum mechanics describes the particles the cat is made of as having wave properties, including the property of superposition. Schrödinger describes the wavefunction of the box's contents at the end of the hour:

The wavefunction of the entire system would express this situation by having the living and the dead cat mixed ... in equal parts [50/50 proportions]. The uncertainty originally restricted to the atomic domain has been transformed into a macroscopic uncertainty...

At first Schrödinger's description seems like nonsense. When you opened the box, would you see two ghostlike cats, as in a doubly exposed photograph, one dead and one alive? Obviously not. You would have a single, fully material cat, which would either be dead or very, very upset. But Schrödinger has an equally strange and logical answer for that objection. In the same way that the quantum randomness of the radioactive atom spread to the cat and made its wavefunction a random mixture of life and death, the randomness spreads wider once you open the box, and your own wavefunction becomes a mixture of a person who has just killed a cat and a person who hasn't.

Discussion Questions

Compare \(\Delta p\) and \(\Delta x\) for the two lowest energy levels of the one-dimensional particle in a box, and discuss how this relates to the uncertainty principle.

On a graph of \(\Delta p\) versus \(\Delta x\), sketch the regions that are allowed and forbidden by the Heisenberg uncertainty principle.
Interpret the graph: Where does an atom lie on it? An elephant? Can either \(p\) or \(x\) be measured with perfect accuracy if we don't care about the other?

j / An electron in a gentle electric field gradually shortens its wavelength as it gains energy.

k / The wavefunction's tails go where classical physics says they shouldn't.

13.3.5 Electrons in electric fields

So far the only electron wave patterns we've considered have been simple sine waves, but whenever an electron finds itself in an electric field, it must have a more complicated wave pattern. Let's consider the example of an electron being accelerated by the electron gun at the back of a TV tube. Newton's laws are not useful, because they implicitly assume that the path taken by the particle is a meaningful concept. Conservation of energy is still valid in quantum physics, however. In terms of energy, the electron is moving from a region of low voltage into a region of higher voltage. Since its charge is negative, it loses electrical energy by moving to a higher voltage, so its kinetic energy increases. As its electrical energy goes down, its kinetic energy goes up by an equal amount, keeping the total energy constant. Increasing kinetic energy implies a growing momentum, and therefore a shortening wavelength, j.

The wavefunction as a whole does not have a single well-defined wavelength, but the wave changes so gradually that if you only look at a small part of it you can still pick out a wavelength and relate it to the momentum and energy. (The picture actually exaggerates by many orders of magnitude the rate at which the wavelength changes.) But what if the electric field was stronger? The electric field in a TV is only \(\sim10^5\) N/C, but the electric field within an atom is more like \(10^{12}\) N/C. In figure l, the wavelength changes so rapidly that there is nothing that looks like a sine wave at all. We could get a rough idea of the wavelength in a given region by measuring the distance between two peaks, but that would only be a rough approximation. Suppose we want to know the wavelength at point \(P\). The trick is to construct a sine wave, like the one shown with the dashed line, which matches the curvature of the actual wavefunction as closely as possible near \(P\). The sine wave that matches as well as possible is called the “osculating” curve, from a Latin word meaning “to kiss.” The wavelength of the osculating curve is the wavelength that will relate correctly to conservation of energy.

l / A typical wavefunction of an electron in an atom (heavy curve) and the osculating sine wave (dashed curve) that matches its curvature at point P.

We implicitly assumed that the particle-in-a-box wavefunction would cut off abruptly at the sides of the box, k/1, but that would be unphysical. A kink has infinite curvature, and curvature is related to energy, so it can't be infinite. A physically realistic wavefunction must always “tail off” gradually, k/2. In classical physics, a particle can never enter a region in which its interaction energy \(U\) would be greater than the amount of energy it has available. But in quantum physics the wavefunction will always have a tail that reaches into the classically forbidden region. If it was not for this effect, called tunneling, the fusion reactions that power the sun would not occur, because of the high electrical energy nuclei need in order to get close together! Tunneling is discussed in more detail in the following subsection.

m / Tunneling through a barrier.
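The osculating-wavelength idea lends itself to a quick numerical illustration. In this minimal sketch of mine (the TV-style field of \(10^5\) N/C and the 200 eV total energy are assumed values, not from the text), conservation of energy gives the local kinetic energy, and \(\lambda=h/\sqrt{2mK}\) gives the local wavelength, which shortens as the electron gains energy:

```python
import numpy as np

h = 6.63e-34     # Planck's constant, J*s
m_e = 9.11e-31   # electron mass, kg
e = 1.60e-19     # elementary charge, C

E_total = 200.0 * e                 # assumed total energy: 200 eV
x = np.linspace(0.0, 1.0e-3, 5)     # positions along 1 mm of the gun
U = -e * 1.0e5 * x                  # interaction energy in an assumed 1e5 N/C field

K = E_total - U                     # conservation of energy: K = E - U
lam = h / np.sqrt(2 * m_e * K)      # local (osculating) wavelength

for xi, li in zip(x, lam):
    print(f"x = {xi:.2e} m   lambda = {li:.2e} m")
```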
n / The electrical, nuclear, and total interaction energies for an alpha particle escaping from a nucleus.

o / A particle encounters a step of height \(U<E\) in the interaction energy. Both sides are classically allowed. A reflected wave exists, but is not shown in the figure.

p / The marble has zero probability of being reflected from the edge of the table. (This example has \(U<0\), not \(U>0\) as in figures o and q).

q / Making the step more gradual reduces the probability of reflection.

r / 1. The one-dimensional version of the Laplacian is the second derivative. It is positive here because the average of the two nearby points is greater than the value at the center. 2. The Laplacian of the function \(A\) in example 20 is positive because the average of the four nearby points along the perpendicular axes is greater than the function's value at the center. 3. \(\nabla^2 C=0\). The average is the same as the value at the center.

13.3.6 The Schrödinger equation

In subsection 13.3.5 we were able to apply conservation of energy to an electron's wavefunction, but only by using the clumsy graphical technique of osculating sine waves as a measure of the wave's curvature. You have learned a more convenient measure of curvature in calculus: the second derivative. To relate the two approaches, we take the second derivative of a sine wave: \[\begin{align*} \frac{d^2}{dx^2}\sin(2\pi x/\lambda) &= \frac{d}{dx}\left(\frac{2\pi}{\lambda}\cos\frac{2\pi x}{\lambda}\right) \\ &= -\left(\frac{2\pi}{\lambda}\right)^2 \sin\frac{2\pi x}{\lambda} \end{align*}\] Taking the second derivative gives us back the same function, but with a minus sign and a constant out in front that is related to the wavelength. We can thus relate the second derivative to the osculating wavelength: \[\begin{equation*} \frac{d^2\Psi}{dx^2} = -\left(\frac{2\pi}{\lambda}\right)^2\Psi \end{equation*}\] This could be solved for \(\lambda \) in terms of \(\Psi \), but it will turn out below to be more convenient to leave it in this form. Applying this to conservation of energy, we have \[\begin{align*} \begin{split} E &= K + U \\ &= \frac{p^2}{2m} + U \\ &= \frac{(h/\lambda)^2}{2m} + U \end{split} \end{align*}\] Note that both the curvature equation and the energy equation have \(\lambda^2\) in the denominator. We can simplify our algebra by multiplying both sides of the energy equation by \(\Psi \) to make it look more like the curvature equation: \[\begin{align*} E \cdot \Psi &= \frac{(h/\lambda)^2}{2m}\Psi + U \cdot \Psi \\ &= \frac{1}{2m}\left(\frac{h}{2\pi}\right)^2\left(\frac{2\pi}{\lambda}\right)^2\Psi + U \cdot \Psi \\ &= -\frac{1}{2m}\left(\frac{h}{2\pi}\right)^2 \frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{align*}\] Further simplification is achieved by using the symbol \(\hbar\) (\(h\) with a slash through it, read “h-bar”) as an abbreviation for \(h/2\pi \). We then have the important result known as the Schrödinger equation: \[\begin{equation*} E \cdot \Psi = -\frac{\hbar^2}{2m}\frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{equation*}\] (Actually this is a simplified version of the Schrödinger equation, applying only to standing waves in one dimension.) Physically it is a statement of conservation of energy. The total energy \(E\) must be constant, so the equation tells us that a change in interaction energy \(U\) must be accompanied by a change in the curvature of the wavefunction.
This change in curvature relates to a change in wavelength, which corresponds to a change in momentum and kinetic energy. Considering the assumptions that were made in deriving the Schrödinger equation, would it be correct to apply it to a photon? To an electron moving at relativistic speeds?

Usually we know right off the bat how \(U\) depends on \(x\), so the basic mathematical problem of quantum physics is to find a function \(\Psi(x)\) that satisfies the Schrödinger equation for a given interaction-energy function \(U(x)\). An equation, such as the Schrödinger equation, that specifies a relationship between a function and its derivatives is known as a differential equation.

The detailed study of the solution of the Schrödinger equation is beyond the scope of this book, but we can gain some important insights by considering the easiest version of the Schrödinger equation, in which the interaction energy \(U\) is constant. We can then rearrange the Schrödinger equation as \[\begin{equation*} \frac{d^2\Psi}{dx^2} = \frac{2m(U-E)}{\hbar^2}\, \Psi , \end{equation*}\] which boils down to \[\begin{equation*} \frac{d^2\Psi}{dx^2} = a\Psi , \end{equation*}\] where, according to our assumptions, \(a\) is independent of \(x\). We need to find a function whose second derivative is the same as the original function except for a multiplicative constant. The only functions with this property are sine waves and exponentials: \[\begin{align*} \frac{d^2}{dx^2}\left[\:q\sin(rx+s)\:\right] &= -qr^2\sin(rx+s) \\ \frac{d^2}{dx^2}\left[qe^{rx+s}\right] &= qr^2e^{rx+s} \end{align*}\] The sine wave gives negative values of \(a\), \(a=-r^2\), and the exponential gives positive ones, \(a=r^2\). The former applies to the classically allowed region with \(U<E\).

This leads us to a quantitative calculation of the tunneling effect discussed briefly in the preceding subsection. The wavefunction evidently tails off exponentially in the classically forbidden region. Suppose, as shown in figure m, a wave-particle traveling to the right encounters a barrier that it is classically forbidden to enter. Although the form of the Schrödinger equation we're using technically does not apply to traveling waves (because it makes no reference to time), it turns out that we can still use it to make a reasonable calculation of the probability that the particle will make it through the barrier. Inside the barrier the physically sensible solution is the decaying exponential \(qe^{-rx+s}\), with \(r=\sqrt{2m(U-E)}/\hbar\). If we let the barrier's width be \(w\), then the ratio of the wavefunction where the particle emerges from the barrier to the wavefunction where it enters is \[\begin{equation*} \frac{qe^{-r(x+w)+s}}{qe^{-rx+s}} = e^{-rw} . \end{equation*}\] Probabilities are proportional to the squares of wavefunctions, so the probability of making it through the barrier is \[\begin{align*} P &= e^{-2rw} \\ &= \exp\left(-\frac{2w}{\hbar}\sqrt{2m(U-E)}\right) \end{align*}\]

If we were to apply this equation to find the probability that a person can walk through a wall, what would the small value of Planck's constant imply?

Example 18: Tunneling in alpha decay
Naively, we would expect alpha decay to be a very fast process. The typical speeds of neutrons and protons inside a nucleus are extremely high (see problem 20). If we imagine an alpha particle coalescing out of neutrons and protons inside the nucleus, then at the typical speeds we're talking about, it takes a ridiculously small amount of time for them to reach the surface and try to escape. Clattering back and forth inside the nucleus, the alpha could be imagined as making a vast number of these “escape attempts” every second.
Consider figure n, however, which shows the interaction energy for an alpha particle escaping from a nucleus. The electrical energy is \(kq_1q_2/r\) when the alpha is outside the nucleus, while its variation inside the nucleus has the shape of a parabola, as a consequence of the shell theorem. The nuclear energy is constant when the alpha is inside the nucleus, because the forces from all the neighboring neutrons and protons cancel out; it rises sharply near the surface, and flattens out to zero over a distance of \(\sim 1\) fm, which is the maximum distance scale at which the strong force can operate. There is a classically forbidden region immediately outside the nucleus, so the alpha particle can only escape by quantum mechanical tunneling. (It's true, but somewhat counterintuitive, that a repulsive electrical force can make it more difficult for the alpha to get out.)

In reality, alpha-decay half-lives are often extremely long --- sometimes billions of years --- because the tunneling probability is so small. Although the shape of the barrier is not a rectangle, the equation for the tunneling probability on page 876 can still be used as a rough guide to our thinking. Essentially the tunneling probability is so small because \(U-E\) is fairly big, typically about 30 MeV at the peak of the barrier.

Example 19: The correspondence principle for \(E>U\)
The correspondence principle demands that in the classical limit \(h\rightarrow0\), we recover the correct result for a particle encountering a barrier \(U\), for both \(E<U\) and \(E>U\). The \(E<U\) case was analyzed in self-check H on p. 876. In the remainder of this example, we analyze \(E>U\), which turns out to be a little trickier. The particle has enough energy to get over the barrier, and the classical result is that it continues forward at a different speed (a reduced speed if \(U>0\), or an increased one if \(U<0\)), then regains its original speed as it emerges from the other side. What happens quantum-mechanically in this case? We would like to get a “tunneling” probability of 1 in the classical limit. The expression derived on p. 876, however, doesn't apply here, because it was derived under the assumption that the wavefunction inside the barrier was an exponential; in the classically allowed case, the barrier isn't classically forbidden, and the wavefunction inside it is a sine wave.

We can simplify things a little by letting the width \(w\) of the barrier go to infinity. Classically, after all, there is no possibility that the particle will turn around, no matter how wide the barrier. We then have the situation shown in figure o. The analysis is the same as for any other wave being partially reflected at the boundary between two regions where its velocity differs, and the result is the same as the one found on p. 367. The ratio of the amplitude of the reflected wave to that of the incident wave is \(R = (v_2-v_1)/(v_2+v_1)\). The probability of reflection is \(R^2\). (Counterintuitively, \(R^2\) is nonzero even if \(U<0\), i.e., \(v_2>v_1\).)

This seems to violate the correspondence principle. There is no \(m\) or \(h\) anywhere in the result, so we seem to have the result that, even classically, the marble in figure p can be reflected! The solution to this paradox is that the step in figure o was taken to be completely abrupt --- an idealized mathematical discontinuity. Suppose we make the transition a little more gradual, as in figure q. As shown in problem 17 on p. 380, this reduces the amplitude with which a wave is reflected.
By smoothing out the step more and more, we continue to reduce the probability of reflection, until finally we arrive at a barrier shaped like a smooth ramp. More detailed calculations show that this results in zero reflection in the limit where the width of the ramp is large compared to the wavelength.

Three dimensions

For simplicity, we've been considering the Schrödinger equation in one dimension, so that \(\Psi\) is only a function of \(x\), and has units of \(\text{m}^{-1/2}\) rather than \(\text{m}^{-3/2}\). Since the Schrödinger equation is a statement of conservation of energy, and energy is a scalar, the generalization to three dimensions isn't particularly complicated. The total energy term \(E\cdot\Psi\) and the interaction energy term \(U\cdot\Psi\) involve nothing but scalars, and don't need to be changed at all. In the kinetic energy term, however, we're essentially basing our computation of the kinetic energy on the squared magnitude of the momentum, \(p_x^2\), and in three dimensions this would clearly have to be generalized to \(p_x^2+p_y^2+p_z^2\). The obvious way to achieve this is to replace the second derivative \(d^2\Psi/dx^2\) with the sum \(\partial^2\Psi/\partial x^2+ \partial^2\Psi/\partial y^2+ \partial^2\Psi/\partial z^2\). Here the partial derivative symbol \(\partial\), introduced on page 216, indicates that when differentiating with respect to a particular variable, the other variables are to be considered as constants. This operation on the function \(\Psi\) is notated \(\nabla^2\Psi\), and the derivative-like operator \(\nabla^2=\partial^2/\partial x^2+ \partial^2/\partial y^2+ \partial^2/\partial z^2\) is called the Laplacian. It occurs elsewhere in physics. For example, in classical electrostatics, the voltage in a region of vacuum must be a solution of the equation \(\nabla^2V=0\). Like the second derivative, the Laplacian is essentially a measure of curvature. Or, as shown in figure r, we can think of it as a measure of how much the value of a function at a certain point differs from the average of its value on nearby points.

Example 20: Examples of the Laplacian in two dimensions
\(\triangleright\) Compute the Laplacians of the following functions in two dimensions, and interpret them: \(A=x^2+y^2\), \(B=-x^2-y^2\), \(C=x^2-y^2\).

\(\triangleright\) The first derivative of function \(A\) with respect to \(x\) is \(\partial A/\partial x=2x\). Since \(y\) is treated as a constant in the computation of the partial derivative \(\partial/\partial x\), the second term goes away. The second derivative of \(A\) with respect to \(x\) is \(\partial^2 A/\partial x^2=2\). Similarly we have \(\partial^2 A/\partial y^2=2\), so \(\nabla^2 A=4\). All derivative operators, including \(\nabla^2\), have the linear property that multiplying the input function by a constant just multiplies the output function by the same constant. Since \(B=-A\), we have \(\nabla^2 B=-4\). For function \(C\), the \(x\) term contributes a second derivative of 2, but the \(y\) term contributes \(-2\), so \(\nabla^2 C=0\). The interpretation of the positive sign in \(\nabla^2 A=4\) is that \(A\)'s graph is shaped like a trophy cup, and the cup is concave up. \(\nabla^2 B<0\) is because \(B\) is concave down. Function \(C\) is shaped like a saddle. Since its curvature along one axis is concave up, but the curvature along the other is down and equal in magnitude, the function is considered to have zero concavity overall.
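Since the Laplacian is just a sum of second derivatives, example 20 can be verified with finite differences. A minimal sketch of mine (the sample point and step size are arbitrary choices), not something from the text:

```python
def laplacian(f, x, y, step=1e-4):
    # Estimate the 2D Laplacian of f at (x, y) by second differences.
    d2x = (f(x + step, y) - 2 * f(x, y) + f(x - step, y)) / step**2
    d2y = (f(x, y + step) - 2 * f(x, y) + f(x, y - step)) / step**2
    return d2x + d2y

A = lambda x, y: x**2 + y**2    # trophy cup: concave up both ways
B = lambda x, y: -x**2 - y**2   # concave down both ways
C = lambda x, y: x**2 - y**2    # saddle

for name, f in [("A", A), ("B", B), ("C", C)]:
    print(name, round(laplacian(f, 0.7, -1.3), 4))   # -> 4.0, -4.0, 0.0
```

The second-difference formula is just the discrete version of figure r's interpretation: it compares a function's value at a point with the average of the values at neighboring points.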
Example 21: A classically allowed region with constant \(U\)
In a classically allowed region with constant \(U\), we expect the solutions to the Schrödinger equation to be sine waves. A sine wave in three dimensions has the form \[\begin{equation*} \Psi = \sin\left( k_x x + k_y y + k_z z \right) . \end{equation*}\] When we compute \(\partial^2\Psi/\partial x^2\), double differentiation of \(\sin\) gives \(-\sin\), and the chain rule brings out a factor of \(k_x^2\). Applying all three second derivative operators, we get \[\begin{align*} \nabla^2\Psi &= \left(-k_x^2-k_y^2-k_z^2\right)\sin\left( k_x x + k_y y + k_z z \right) \\ &= -\left(k_x^2+k_y^2+k_z^2\right)\Psi . \end{align*}\] The Schrödinger equation gives \[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= -\frac{\hbar^2}{2m}\cdot -\left(k_x^2+k_y^2+k_z^2\right)\Psi + U\cdot\Psi \\ E-U &= \frac{\hbar^2}{2m}\left(k_x^2+k_y^2+k_z^2\right) , \end{align*}\] which can be satisfied since we're in a classically allowed region with \(E-U>0\), and the right-hand side is manifestly positive.

s / 1. Oscillations can go back and forth, but it's also possible for them to move along a path that bites its own tail, like a circle. Photons act like one, electrons like the other. 2. Back-and-forth oscillations can naturally be described by a segment taken from the real number line, and we visualize the corresponding type of wave as a sine wave. Oscillations around a closed path relate more naturally to the complex number system. The complex number system has rotation built into its structure, e.g., the sequence 1, \(i\), \(i^2\), \(i^3\), ... rotates around the unit circle in 90-degree increments. 3. The double slit experiment embodies the one and only mystery of quantum physics. Either type of wave can undergo double-slit interference.

Use of complex numbers

In a classically forbidden region, a particle's total energy, \(U+K\), is less than its \(U\), so its \(K\) must be negative. If we want to keep believing in the equation \(K=p^2/2m\), then apparently the momentum of the particle is the square root of a negative number. This is a symptom of the fact that the Schrödinger equation fails to describe all of nature unless the wavefunction and various other quantities are allowed to be complex numbers. In particular it is not possible to describe traveling waves correctly without using complex wavefunctions. Complex numbers were reviewed in subsection 10.5.5, p. 607.

This may seem like nonsense, since real numbers are the only ones that are, well, real! Quantum mechanics can always be related to the real world, however, because its structure is such that the results of measurements always come out to be real numbers. For example, we may describe an electron as having non-real momentum in classically forbidden regions, but its average momentum will always come out to be real (the imaginary parts average out to zero), and it can never transfer a non-real quantity of momentum to another particle. A complete investigation of these issues is beyond the scope of this book, and this is why we have normally limited ourselves to standing waves, which can be described with real-valued wavefunctions. Figure s gives a visual depiction of the difference between real and complex wavefunctions. The following remarks may also be helpful. Neither of the graphs in s/2 should be interpreted as a path traveled by something. This isn't anything mystical about quantum physics.
What the graphs show is just an ordinary fact about waves, which we first encountered in subsection 6.1.1, p. 340, where we saw the distinction between the motion of a wave and the motion of a wave pattern. In both examples in s/2, the wave pattern is moving in a straight line to the right. The helical graph in s/2 shows a complex wavefunction whose value rotates around a circle in the complex plane with a frequency \(f\) related to its energy by \(E=hf\). As it does so, its squared magnitude \(|\Psi|^2\) stays the same, so the corresponding probability stays constant. Which direction does it rotate? This direction is purely a matter of convention, since the distinction between the symbols \(i\) and \(-i\) is arbitrary --- both are equally valid as square roots of \(-1\). We can, for example, arbitrarily say that electrons with positive energies have wavefunctions whose phases rotate counterclockwise, and as long as we follow that rule consistently within a given calculation, everything will work. Note that it is not possible to define anything like a right-hand rule here, because the complex plane shown in the right-hand side of s/2 doesn't represent two dimensions of physical space; unlike a screw going into a piece of wood, an electron doesn't have a direction of rotation that depends on its direction of travel. Example 22: Superposition of complex wavefunctions \(\triangleright\) The right side of figure s/3 is a cartoonish representation of double-slit interference; it depicts the situation at the center, where symmetry guarantees that the interference is constructive. Suppose that at some off-center point, the two wavefunctions being superposed are \(\Psi_1=b\) and \(\Psi_2=bi\), where \(b\) is a real number with units. Compare the probability of finding the electron at this position with what it would have been if the superposition had been purely constructive, \(b+b=2b\). \(\triangleright\) The probability per unit volume is proportional to the square of the magnitude of the total wavefunction, so we have \[\begin{equation*} \frac{P_{\text{off center}}}{P_{\text{center}}} = \frac{|b+bi|^2}{|b+b|^2} = \frac{b^2(1^2+1^2)}{b^2(2^2+0^2)} = \frac{1}{2} . \end{equation*}\] Discussion Questions The zero level of interaction energy \(U\) is arbitrary, e.g., it's equally valid to pick the zero of gravitational energy to be on the floor of your lab or at the ceiling. Suppose we're doing the double-slit experiment, s/3, with electrons. We define the zero-level of \(U\) so that the total energy \(E=U+K\) of each electron is positive, and we observe a certain interference pattern like the one in figure i on p. 850. What happens if we then redefine the zero-level of \(U\) so that the electrons have \(E\lt0\)? The figure shows a series of snapshots in the motion of two pulses on a coil spring, one negative and one positive, as they move toward one another and superpose. The final image is very close to the moment at which the two pulses cancel completely. The following discussion is simpler if we consider infinite sine waves rather than pulses. How can the cancellation of two such mechanical waves be reconciled with conservation of energy? What about the case of colliding electromagnetic waves? Quantum-mechanically, the issue isn't conservation of energy, it's conservation of probability, i.e., if there's initially a 100% probability that a particle exists somewhere, we don't want the probability to be more or less than 100% at some later time.
What happens when the colliding waves have real-valued wavefunctions \(\Psi\)? Complex ones? What happens with standing waves? The figure shows a skateboarder tipping over into a swimming pool with zero initial kinetic energy. There is no friction, the corners are smooth enough to allow the skater to pass over them smoothly, and the vertical distances are small enough so that negligible time is required for the vertical parts of the motion. The pool is divided into a deep end and a shallow end. Their widths are equal. The deep end is four times as deep as the shallow end. (1) Classically, compare the skater's velocity in the left and right regions, and infer the probability of finding the skater in either of the two halves if an observer peeks at a random moment. (2) Quantum-mechanically, this could be a one-dimensional model of an electron shared between two atoms in a diatomic molecule. Compare the electron's kinetic energies, momenta, and wavelengths in the two sides. For simplicity, let's assume that there is no tunneling into the classically forbidden regions. What is the simplest standing-wave pattern that you can draw, and what are the probabilities of finding the electron in one side or the other? Does this obey the correspondence principle? 13.4 The Atom You can learn a lot by taking a car engine apart, but you will have learned a lot more if you can put it all back together again and make it run. Half the job of reductionism is to break nature down into its smallest parts and understand the rules those parts obey. The second half is to show how those parts go together, and that is our goal in this chapter. We have seen how certain features of all atoms can be explained on a generic basis in terms of the properties of bound states, but this kind of argument clearly cannot tell us any details of the behavior of an atom or explain why one atom acts differently from another. The biggest embarrassment for reductionists is that the job of putting things back together is usually much harder than taking them apart. Seventy years after the fundamentals of atomic physics were solved, it is only beginning to be possible to calculate accurately the properties of atoms that have many electrons. Systems consisting of many atoms are even harder. Supercomputer manufacturers point to the folding of large protein molecules as a process whose calculation is just barely feasible with their fastest machines. The goal of this chapter is to give a gentle and visually oriented guide to some of the simpler results about atoms. a / Eight wavelengths fit around this circle (\(\ell=8\)). 13.4.1 Classifying states We'll focus our attention first on the simplest atom, hydrogen, with one proton and one electron. We know in advance a little of what we should expect for the structure of this atom. Since the electron is bound to the proton by electrical forces, it should display a set of discrete energy states, each corresponding to a certain standing wave pattern. We need to understand what states there are and what their properties are. What properties should we use to classify the states? The most sensible approach is to use conserved quantities. Energy is one conserved quantity, and we already know to expect each state to have a specific energy. It turns out, however, that energy alone is not sufficient. Different standing wave patterns of the atom can have the same energy. Momentum is also a conserved quantity, but it is not particularly appropriate for classifying the states of the electron in a hydrogen atom.
The reason is that the force between the electron and the proton results in the continual exchange of momentum between them. (Why wasn't this a problem for energy as well? Kinetic energy and momentum are related by \(K=p^2/2m\), so the much more massive proton never has very much kinetic energy. We are making an approximation by assuming all the kinetic energy is in the electron, but it is quite a good approximation.) Angular momentum does help with classification. There is no transfer of angular momentum between the proton and the electron, since the force between them is a center-to-center force, producing no torque. Like energy, angular momentum is quantized in quantum physics. As an example, consider a quantum wave-particle confined to a circle, like a wave in a circular moat surrounding a castle. A sine wave in such a “quantum moat” cannot have any old wavelength, because an integer number of wavelengths must fit around the circumference, \(C\), of the moat. The larger this integer is, the shorter the wavelength, and a shorter wavelength relates to greater momentum and angular momentum. Since this integer is related to angular momentum, we use the symbol \(\ell\) for it: \[\begin{equation*} \lambda = C / \ell \end{equation*}\] The angular momentum is \[\begin{equation*} L = rp . \end{equation*}\] Here, \(r=C/2\pi \), and \(p=h/\lambda=h\ell/C\), so \[\begin{align*} L &= \frac{C}{2\pi}\cdot\frac{h\ell}{C} \\ &= \frac{h}{2\pi}\ell \end{align*}\] In the example of the quantum moat, angular momentum is quantized in units of \(h/2\pi \). This makes \(h/2\pi \) a pretty important number, so we define the abbreviation \(\hbar=h/2\pi \). This symbol is read “h-bar.” In fact, this is a completely general fact in quantum physics, not just a fact about the quantum moat: Quantization of angular momentum The angular momentum of a particle due to its motion through space is quantized in units of \(\hbar\). What is the angular momentum of the wavefunction shown at the beginning of the section? b / Reconciling the uncertainty principle with the definition of angular momentum. 13.4.2 Three dimensions Our discussion of quantum-mechanical angular momentum has so far been limited to rotation in a plane, for which we can simply use positive and negative signs to indicate clockwise and counterclockwise directions of rotation. A hydrogen atom, however, is unavoidably three-dimensional. The classical treatment of angular momentum in three dimensions has been presented in section 4.3; in general, the angular momentum of a particle is defined as the vector cross product \(\mathbf{r}\times\mathbf{p}\). There is a basic problem here: the angular momentum of the electron in a hydrogen atom depends on both its distance \(\mathbf{r}\) from the proton and its momentum \(\mathbf{p}\), so in order to know its angular momentum precisely it would seem we would need to know both its position and its momentum simultaneously with good accuracy. This, however, seems forbidden by the Heisenberg uncertainty principle. Actually the uncertainty principle does place limits on what can be known about a particle's angular momentum vector, but it does not prevent us from knowing its magnitude as an exact integer multiple of \(\hbar\).
The reason is that in three dimensions, there are really three separate uncertainty principles: \[\begin{align*} \Delta p_x \Delta x &\gtrsim h \\ \Delta p_y \Delta y &\gtrsim h \\ \Delta p_z \Delta z &\gtrsim h \end{align*}\] Now consider a particle, b/1, that is moving along the \(x\) axis at position \(x\) and with momentum \(p_x\). We may not be able to know both \(x\) and \(p_x\) with unlimited accuracy, but we can still know the particle's angular momentum about the origin exactly: it is zero, because the particle is moving directly away from the origin. Suppose, on the other hand, a particle finds itself, b/2, at a position \(x\) along the \(x\) axis, and it is moving parallel to the \(y\) axis with momentum \(p_y\). It has angular momentum \(xp_y\) about the \(z\) axis, and again we can know its angular momentum with unlimited accuracy, because the uncertainty principle only relates \(x\) to \(p_x\) and \(y\) to \(p_y\). It does not relate \(x\) to \(p_y\). As shown by these examples, the uncertainty principle does not restrict the accuracy of our knowledge of angular momenta as severely as might be imagined. However, it does prevent us from knowing all three components of an angular momentum vector simultaneously. The most general statement about this is the following theorem, which we present without proof: The angular momentum vector in quantum physics The most that can be known about an angular momentum vector is its magnitude and one of its three vector components. Both are quantized in units of \(\hbar\). c / A cross-section of a hydrogen wavefunction. d / The energy of a state in the hydrogen atom depends only on its \(n\) quantum number. 13.4.3 The hydrogen atom Deriving the wavefunctions of the states of the hydrogen atom from first principles would be mathematically too complex for this book, but it's not hard to understand the logic behind such a wavefunction in visual terms. Consider the wavefunction from the beginning of the section, which is reproduced in figure c. Although the graph looks three-dimensional, it is really only a representation of the part of the wavefunction lying within a two-dimensional plane. The third (up-down) dimension of the plot represents the value of the wavefunction at a given point, not the third dimension of space. The plane chosen for the graph is the one perpendicular to the angular momentum vector. Each ring of peaks and valleys has eight wavelengths going around in a circle, so this state has \(L=8\hbar\), i.e., we label it \(\ell=8\). The wavelength is shorter near the center, and this makes sense because when the electron is close to the nucleus it has a lower electrical energy, a higher kinetic energy, and a higher momentum. Between each ring of peaks in this wavefunction is a nodal circle, i.e., a circle on which the wavefunction is zero. The full three-dimensional wavefunction has nodal spheres: a series of nested spherical surfaces on which it is zero. The number of radii at which nodes occur, including \(r=\infty\), is called \(n\), and \(n\) turns out to be closely related to energy. The ground state has \(n=1\) (a single node only at \(r=\infty\)), and higher-energy states have higher \(n\) values. There is a simple equation relating \(n\) to energy, which we will discuss in subsection 13.4.4. The numbers \(n\) and \(\ell\), which identify the state, are called its quantum numbers. A state of a given \(n\) and \(\ell\) can be oriented in a variety of directions in space.
We might try to indicate the orientation using the three quantum numbers \(\ell_x=L_x/\hbar\), \(\ell_y=L_y/\hbar\), and \(\ell_z=L_z/\hbar\). But we have already seen that it is impossible to know all three of these simultaneously. To give the most complete possible description of a state, we choose an arbitrary axis, say the \(z\) axis, and label the state according to \(n\), \(\ell\), and \(\ell_z\). Angular momentum requires motion, and motion implies kinetic energy. Thus it is not possible to have a given amount of angular momentum without having a certain amount of kinetic energy as well. Since energy relates to the \(n\) quantum number, this means that for a given \(n\) value there will be a maximum possible \(\ell\). It turns out that this maximum value of \(\ell\) equals \(n-1\). In general, we can list the possible combinations of quantum numbers as follows:

\(n\) can equal 1, 2, 3, …
\(\ell\) can range from 0 to \(n-1\), in steps of 1
\(\ell_z\) can range from \(-\ell\) to \(\ell\), in steps of 1

Applying these rules, we have the following list of states:

\(n=1\), \(\ell=0\), \(\ell_z=0\): one state
\(n=2\), \(\ell=0\), \(\ell_z=0\): one state
\(n=2\), \(\ell=1\), \(\ell_z=-1\), 0, or 1: three states

Continue the list for \(n=3\). Figure e on page 888 shows the lowest-energy states of the hydrogen atom. The left-hand column of graphs displays the wavefunctions in the \(x-y\) plane, and the right-hand column shows the probability distribution in a three-dimensional representation. e / The three states of the hydrogen atom having the lowest energies. Discussion Questions The quantum number \(n\) is defined as the number of radii at which the wavefunction is zero, including \(r=\infty\). Relate this to the features of figure e. Based on the definition of \(n\), why can't there be any such thing as an \(n=0\) state? Relate the features of the wavefunction plots in figure e to the corresponding features of the probability distribution pictures. How can you tell from the wavefunction plots in figure e which ones have which angular momenta? Criticize the following incorrect statement: “The \(\ell=8\) wavefunction in figure c has a shorter wavelength in the center because in the center the electron is in a higher energy level.” Discuss the implications of the fact that the probability cloud of the \(n=2\), \(\ell=1\) state is split into two parts. f / The energy levels of a particle in a box, contrasted with those of the hydrogen atom. 13.4.4 Energies of states in hydrogen The experimental technique for measuring the energy levels of an atom accurately is spectroscopy: the study of the spectrum of light emitted (or absorbed) by the atom. Only photons with certain energies can be emitted or absorbed by a hydrogen atom, for example, since the amount of energy gained or lost by the atom must equal the difference in energy between the atom's initial and final states. Spectroscopy had become a highly developed art several decades before Einstein even proposed the photon, and the Swiss spectroscopist Johann Balmer determined in 1885 that there was a simple equation that gave all the wavelengths emitted by hydrogen. In modern terms, we think of the photon wavelengths merely as indirect evidence about the underlying energy levels of the atom, and we rework Balmer's result into an equation for these atomic energy levels: \[\begin{equation*} E_n = -\frac{2.2\times10^{-18}\ \text{J}}{n^2} . \end{equation*}\] This energy includes both the kinetic energy of the electron and the electrical energy.
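Before discussing what the terms in this equation mean, here is a quick numerical sketch (my own, using rounded values of the constants; the choice of the \(n=3\) to \(n=2\) transition is arbitrary) showing how the energy-level equation reproduces one of the wavelengths Balmer fit:

h = 6.63e-34    # Planck's constant, J*s
c = 3.00e8      # speed of light, m/s
E1 = -2.18e-18  # the constant from the equation above, J

def E(n):
    return E1 / n**2

for n in range(1, 6):
    print(n, E(n), "J")   # -2.18e-18, -5.45e-19, -2.42e-19, ...

# Photon emitted in the n=3 -> n=2 transition:
photon_energy = E(3) - E(2)        # positive, since the n=3 state is higher
wavelength = h * c / photon_energy
print(wavelength)                   # about 6.6e-7 m, the red hydrogen line

The well-known 656 nm red line of hydrogen comes out correctly, to the precision of the rounded constants.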
The zero-level of the electrical energy scale is chosen to be the energy of an electron and a proton that are infinitely far apart. With this choice, negative energies correspond to bound states and positive energies to unbound ones. Where does the mysterious numerical factor of \(2.2\times10^{-18}\ \text{J}\) come from? In 1913 the Danish theorist Niels Bohr realized that it was exactly numerically equal to a certain combination of fundamental physical constants: \[\begin{equation*} E_n = -\frac{mk^2e^4}{2\hbar^2}\cdot\frac{1}{n^2} , \end{equation*}\] where \(m\) is the mass of the electron, and \(k\) is the Coulomb force constant for electric forces. Bohr was able to cook up a derivation of this equation based on the incomplete version of quantum physics that had been developed by that time, but his derivation is today mainly of historical interest. It assumes that the electron follows a circular path, whereas the whole concept of a path for a particle is considered meaningless in our more complete modern version of quantum physics. Although Bohr was able to produce the right equation for the energy levels, his model also gave various wrong results, such as predicting that the atom would be flat, and that the ground state would have \(\ell=1\) rather than the correct \(\ell=0\). Approximate treatment Rather than leaping straight into a full mathematical treatment, we'll start by looking for some physical insight, which will lead to an approximate argument that correctly reproduces the form of the Bohr equation. A typical standing-wave pattern for the electron consists of a central oscillating area surrounded by a region in which the wavefunction tails off. As discussed in subsection 13.3.6, the oscillating type of pattern is typically encountered in the classically allowed region, while the tailing off occurs in the classically forbidden region, which the electron does not have enough energy to enter according to classical physics. We use the symbol \(r\) for the radius of the spherical boundary between the classically allowed and classically forbidden regions. Classically, \(r\) would be the distance from the proton at which the electron would have to stop, turn around, and head back in. If \(r\) had the same value for every standing-wave pattern, then we'd essentially be solving the particle-in-a-box problem in three dimensions, with the box being a spherical cavity. Consider the energy levels of the particle in a box compared to those of the hydrogen atom, f. They're qualitatively different. The energy levels of the particle in a box get farther and farther apart as we go higher in energy, and this feature doesn't even depend on the details of whether the box is two-dimensional or three-dimensional, or its exact shape. The reason for the spreading is that the box is taken to be completely impenetrable, so its size, \(r\), is fixed. A wave pattern with \(n\) humps has a wavelength proportional to \(r/n\), and therefore a momentum proportional to \(n\), and an energy proportional to \(n^2\). In the hydrogen atom, however, the force keeping the electron bound isn't an infinite force encountered when it bounces off of a wall, it's the attractive electrical force from the nucleus. If we put more energy into the electron, it's like throwing a ball upward with a higher energy --- it will get farther out before coming back down. This means that in the hydrogen atom, we expect \(r\) to increase as we go to states of higher energy.
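To make the contrast with the particle in a box quantitative, the short sketch below (my own illustration, in arbitrary energy units; it demonstrates only the scaling, not real energies of either system) tabulates the gap between adjacent levels for the two patterns, \(E \propto n^2\) for the box and \(E \propto -1/n^2\) for hydrogen:

def box(n):       # particle in a box: E proportional to n**2
    return n**2

def hydrogen(n):  # hydrogen: E proportional to -1/n**2
    return -1.0 / n**2

for n in range(1, 6):
    print(n,
          "box gap:", box(n + 1) - box(n),                 # 3, 5, 7, ...
          "hydrogen gap:", hydrogen(n + 1) - hydrogen(n))  # 0.75, 0.139, ...

The box gaps grow without limit, while the hydrogen gaps shrink toward zero, so the hydrogen levels pile up just below the energy at which the electron escapes.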
This growth of \(r\) tends to keep the wavelengths of the high energy states from getting too short, reducing their kinetic energy. The closer and closer crowding of the energy levels in hydrogen also makes sense because we know that there is a certain energy that would be enough to make the electron escape completely, and therefore the sequence of bound states cannot extend above that energy. When the electron is at the maximum classically allowed distance \(r\) from the proton, it has zero kinetic energy. Thus when the electron is at distance \(r\), its energy is purely electrical: \[\begin{equation*} E = -\frac{ke^2}{r} \end{equation*}\] Now comes the approximation. In reality, the electron's wavelength cannot be constant in the classically allowed region, but we pretend that it is. Since \(n\) is the number of nodes in the wavefunction, we can interpret it approximately as the number of wavelengths that fit across the diameter \(2r\). We are not even attempting a derivation that would produce all the correct numerical factors like 2 and \(\pi \) and so on, so we simply make the approximation \[\begin{equation*} \lambda \sim \frac{r}{n} . \end{equation*}\] Finally we assume that the typical kinetic energy of the electron is on the same order of magnitude as the absolute value of its total energy. (This is true to within a factor of two for a typical classical system like a planet in a circular orbit around the sun.) We then have \[\begin{align*} \text{absolute value of total energy} &= \frac{ke^2}{r} \\ &\sim K \\ &= p^2/2m \\ &= (h/\lambda)^2/2m \\ &\sim h^2n^2/2mr^2 \end{align*}\] We now solve the equation \(ke^2/r \sim h^2n^2 / 2mr^2\) for \(r\) and throw away numerical factors we can't hope to have gotten right, yielding \[\begin{equation*} r \sim \frac{h^2n^2}{mke^2} . \end{equation*}\] Plugging \(n=1\) into this equation gives \(r=2\) nm, which is indeed on the right order of magnitude. Finally we combine this result for \(r\) with the earlier relation \(E=-ke^2/r\) to find \[\begin{equation*} E \sim -\frac{mk^2e^4}{h^2n^2} , \end{equation*}\] which is correct except for the numerical factors we never aimed to find. Exact treatment of the ground state The general proof of the Bohr equation for all values of \(n\) is beyond the mathematical scope of this book, but it's fairly straightforward to verify it for a particular \(n\), especially given a lucky guess as to what functional form to try for the wavefunction. The form that works for the ground state is \[\begin{equation*} \Psi = ue^{-r/a} , \end{equation*}\] where \(r=\sqrt{x^2+y^2+z^2}\) is the electron's distance from the proton, and \(u\) provides for normalization. In the following, the result \(\partial r/\partial x=x/r\) comes in handy. Computing the partial derivatives that occur in the Laplacian, we obtain for the \(x\) term \[\begin{align*} \frac{\partial\Psi}{\partial x} &= \frac{\partial \Psi}{\partial r} \frac{\partial r}{\partial x} \\ &= -\frac{x}{ar} \Psi \\ \frac{\partial^2\Psi}{\partial x^2} &= -\frac{1}{ar} \Psi -\frac{x}{a}\left(\frac{\partial}{\partial x}\frac{1}{r}\right)\Psi+ \left( \frac{x}{ar}\right)^2 \Psi\\ &= -\frac{1}{ar} \Psi +\frac{x^2}{ar^3}\Psi+ \left( \frac{x}{ar}\right)^2 \Psi , \end{align*}\] so \[\begin{equation*} \nabla^2\Psi = \left( -\frac{2}{ar} + \frac{1}{a^2} \right) \Psi .\end{equation*}\]
The Schrödinger equation gives \[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= \frac{\hbar^2}{2m}\left( \frac{2}{ar} - \frac{1}{a^2} \right)\Psi -\frac{ke^2}{r}\cdot\Psi \end{align*}\] If we require this equation to hold for all \(r\), then we must have equality for both the terms of the form \((\text{constant})\times\Psi\) and for those of the form \((\text{constant}/r)\times\Psi\). That means \[\begin{align*} E &= -\frac{\hbar^2}{2ma^2} \\ \text{and}\quad 0 &= \frac{\hbar^2}{mar} -\frac{ke^2}{r} . \end{align*}\] These two equations can be solved for the unknowns \(a\) and \(E\), giving \[\begin{align*} a &= \frac{\hbar^2}{mke^2} \\ \text{and}\quad E &= -\frac{mk^2e^4}{2\hbar^2} , \end{align*}\] where the result for the energy agrees with the Bohr equation for \(n=1\). The calculation of the normalization constant \(u\) is relegated to homework problem 36. We've verified that the function \(\Psi = ue^{-r/a}\) is a solution to the Schrödinger equation, and yet it has a kink in it at \(r=0\). What's going on here? Didn't I argue before that kinks are unphysical? Example 23: Wave phases in the hydrogen molecule In example 16 on page 867, I argued that the existence of the \(\text{H}_2\) molecule could essentially be explained by a particle-in-a-box argument: the molecule is a bigger box than an individual atom, so each electron's wavelength can be longer, its kinetic energy lower. Now that we're in possession of a mathematical expression for the wavefunction of the hydrogen atom in its ground state, we can make this argument a little more rigorous and detailed. Suppose that two hydrogen atoms are in a relatively cool sample of monoatomic hydrogen gas. Because the gas is cool, we can assume that the atoms are in their ground states. Now suppose that the two atoms approach one another. Making use again of the assumption that the gas is cool, it is reasonable to imagine that the atoms approach one another slowly. Now the atoms come a little closer, but still far enough apart that the region between them is classically forbidden. Each electron can tunnel through this classically forbidden region, but the tunneling probability is small. Each one is now found with, say, 99% probability in its original home, but with 1% probability in the other atom. Each electron is now in a state consisting of a superposition of the ground state of its own atom with the ground state of the other atom. There are two peaks in the superposed wavefunction, but one is a much bigger peak than the other. An interesting question now arises. What are the relative phases of the two electrons? As discussed on page 861, the absolute phase of an electron's wavefunction is not really a meaningful concept. Suppose atom A contains electron Alice, and B electron Bob. Just before the collision, Alice may have wondered, “Is my phase positive right now, or is it negative? But of course I shouldn't ask myself such silly questions,” she adds sheepishly. g / Example 23. But relative phases are well defined. As the two atoms draw closer and closer together, the tunneling probability rises, and eventually gets so high that each electron is spending essentially 50% of its time in each atom. It's now reasonable to imagine that either one of two possibilities could obtain. Alice's wavefunction could either look like g/1, with the two peaks in phase with one another, or it could look like g/2, with opposite phases.
Because relative phases of wavefunctions are well defined, states 1 and 2 are physically distinguishable. In particular, the kinetic energy of state 2 is much higher; roughly speaking, it is like the two-hump wave pattern of the particle in a box, as opposed to 1, which looks roughly like the one-hump pattern with a much longer wavelength. Not only that, but an electron in state 1 has a large probability of being found in the central region, where it has a large negative electrical energy due to its interaction with both protons. State 2, on the other hand, has a low probability of existing in that region. Thus state 1 represents the true ground-state wavefunction of the \(\text{H}_2\) molecule, and putting both Alice and Bob in that state results in a lower energy than their total energy when separated, so the molecule is bound, and will not fly apart spontaneously. State g/3, on the other hand, is not physically distinguishable from g/2, nor is g/4 from g/1. Alice may say to Bob, “Isn't it wonderful that we're in state 1 or 4? I love being stable like this.” But she knows it's not meaningful to ask herself at a given moment which state she's in, 1 or 4. Discussion Questions States of hydrogen with \(n\) greater than about 10 are never observed in the sun. Why might this be? Sketch graphs of \(r\) and \(E\) versus \(n\) for the hydrogen atom, and compare with analogous graphs for the one-dimensional particle in a box. h / The top has angular momentum both because of the motion of its center of mass through space and due to its internal rotation. Electron spin is roughly analogous to the intrinsic spin of the top. 13.4.5 Electron spin It's disconcerting to the novice ping-pong player to encounter for the first time a more skilled player who can put spin on the ball. Even though you can't see that the ball is spinning, you can tell something is going on by the way it interacts with other objects in its environment. In the same way, we can tell from the way electrons interact with other things that they have an intrinsic spin of their own. Experiments show that even when an electron is not moving through space, it still has angular momentum amounting to \(\hbar/2\). This may seem paradoxical because the quantum moat, for instance, gave only angular momenta that were integer multiples of \(\hbar\), not half-units, and I claimed that angular momentum was always quantized in units of \(\hbar\), not just in the case of the quantum moat. That whole discussion, however, assumed that the angular momentum would come from the motion of a particle through space. The \(\hbar/2\) angular momentum of the electron is simply a property of the particle, like its charge or its mass. It has nothing to do with whether the electron is moving or not, and it does not come from any internal motion within the electron. Nobody has ever succeeded in finding any internal structure inside the electron, and even if there were internal structure, it would be mathematically impossible for it to result in a half-unit of angular momentum. We simply have to accept this \(\hbar/2\) angular momentum, called the “spin” of the electron --- Mother Nature rubs our noses in it as an observed fact. Protons and neutrons have the same \(\hbar/2\) spin, while photons have an intrinsic spin of \(\hbar\). In general, half-integer spins are typical of material particles.
Integral values are found for the particles that carry forces: photons, which embody the electric and magnetic fields of force, as well as the more exotic messengers of the nuclear and gravitational forces. As was the case with ordinary angular momentum, we can describe spin angular momentum in terms of its magnitude, and its component along a given axis. We write \(s\) and \(s_z\) for these quantities, expressed in units of \(\hbar\), so an electron has \(s=1/2\) and \(s_z=+1/2\) or \(-1/2\). Taking electron spin into account, we need a total of four quantum numbers to label a state of an electron in the hydrogen atom: \(n\), \(\ell\), \(\ell_z\), and \(s_z\). (We omit \(s\) because it always has the same value.) The symbols \(\ell\) and \(\ell_z\) include only the angular momentum the electron has because it is moving through space, not its spin angular momentum. The availability of two possible spin states of the electron leads to a doubling of the numbers of states:

\(n=1\), \(\ell=0\), \(\ell_z=0\), \(s_z=+1/2\) or \(-1/2\): two states
\(n=2\), \(\ell=0\), \(\ell_z=0\), \(s_z=+1/2\) or \(-1/2\): two states
\(n=2\), \(\ell=1\), \(\ell_z=-1\), 0, or 1, \(s_z=+1/2\) or \(-1/2\): six states

A note about notation There are unfortunately two inconsistent systems of notation for the quantum numbers we've been discussing. The notation I've been using is the one that is used in nuclear physics, but there is a different one that is used in atomic physics.

nuclear physics | atomic physics
\(n\) | same
\(\ell\) | same
\(\ell_x\) | no notation
\(\ell_y\) | no notation
\(\ell_z\) | \(m\)
\(s=1/2\) | no notation (sometimes \(\sigma\))
\(s_x\) | no notation
\(s_y\) | no notation
\(s_z\) | \(s\)

The nuclear physics notation is more logical (not giving special status to the \(z\) axis) and more memorable (\(\ell_z\) rather than the obscure \(m\)), which is why I use it consistently in this book, even though nearly all the applications we'll consider are atomic ones. We are further encumbered with the following historically derived letter labels, which deserve to be eliminated in favor of the simpler numerical ones:

\(\ell=0\): s, \(\ell=1\): p, \(\ell=2\): d, \(\ell=3\): f
\(n=1\): K, \(n=2\): L, \(n=3\): M, \(n=4\): N, \(n=5\): O, \(n=6\): P, \(n=7\): Q

The spdf labels are used in both nuclear and atomic physics, while the KLMNOPQ letters are used only to refer to states of electrons. And finally, there is a piece of notation that is good and useful, but which I simply haven't mentioned yet. The vector \(\mathbf{j}=\boldsymbol{\ell}+\mathbf{s}\) stands for the total angular momentum of a particle in units of \(\hbar\), including both orbital and spin parts. This quantum number turns out to be very useful in nuclear physics, because nuclear forces tend to exchange orbital and spin angular momentum, so a given energy level often contains a mixture of \(\ell\) and \(s\) values, while remaining fairly pure in terms of \(j\). i / The beginning of the periodic table. j / Hydrogen is highly reactive. 13.4.6 Atoms with more than one electron What about other atoms besides hydrogen? It would seem that things would get much more complex with the addition of a second electron. A hydrogen atom only has one particle that moves around much, since the nucleus is so heavy and nearly immobile. Helium, with two, would be a mess. Instead of a wavefunction whose square tells us the probability of finding a single electron at any given location in space, a helium atom would need to have a wavefunction whose square would tell us the probability of finding two electrons at any given combination of points. Ouch!
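Just how bad this “Ouch!” is can be seen by counting the numbers that would be needed to store such a wavefunction on a computer. The lines below are my own illustration; the grid resolution of 100 sample points per axis is an arbitrary choice.

N = 100  # sample points along each axis (an arbitrary, fairly coarse choice)

one_electron = N**3   # Psi(x, y, z): a million values
two_electron = N**6   # Psi(x1, y1, z1, x2, y2, z2): a trillion values

print(one_electron)   # 1000000
print(two_electron)   # 1000000000000

Each added electron multiplies the storage by another factor of a million, which is part of why accurate calculations for many-electron atoms took so long to become feasible.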
In addition, we would have the extra complication of the electrical interaction between the two electrons, rather than being able to imagine everything in terms of an electron moving in a static field of force created by the nucleus alone. Despite all this, it turns out that we can get a surprisingly good description of many-electron atoms simply by assuming the electrons can occupy the same standing-wave patterns that exist in a hydrogen atom. The ground state of helium, for example, would have both electrons in states that are very similar to the \(n=1\) states of hydrogen. The second-lowest-energy state of helium would have one electron in an \(n=1\) state, and the other in an \(n=2\) state. The relatively complex spectra of elements heavier than hydrogen can be understood as arising from the great number of possible combinations of states for the electrons. A surprising thing happens, however, with lithium, the three-electron atom. We would expect the ground state of this atom to be one in which all three electrons settle down into \(n=1\) states. What really happens is that two electrons go into \(n=1\) states, but the third stays up in an \(n=2\) state. This is a consequence of a new principle of physics: The Pauli Exclusion Principle Only one electron can ever occupy a given state. There are two \(n=1\) states, one with \(s_z=+1/2\) and one with \(s_z=-1/2\), but there is no third \(n=1\) state for lithium's third electron to occupy, so it is forced to go into an \(n=2\) state. It can be proved mathematically that the Pauli exclusion principle applies to any type of particle that has half-integer spin. Thus two neutrons can never occupy the same state, and likewise for two protons. Photons, however, are immune to the exclusion principle because their spin is an integer. Deriving the periodic table We can now account for the structure of the periodic table, which seemed so mysterious even to its inventor Mendeleev. The first row consists of atoms with electrons only in the \(n=1\) states:

H: 1 electron in an \(n=1\) state
He: 2 electrons in the two \(n=1\) states

The next row is built by filling the \(n=2\) energy levels:

Li: 2 electrons in \(n=1\) states, 1 electron in an \(n=2\) state
Be: 2 electrons in \(n=1\) states, 2 electrons in \(n=2\) states
…
O: 2 electrons in \(n=1\) states, 6 electrons in \(n=2\) states
F: 2 electrons in \(n=1\) states, 7 electrons in \(n=2\) states
Ne: 2 electrons in \(n=1\) states, 8 electrons in \(n=2\) states

In the third row we start in on the \(n=3\) levels:

Na: 2 electrons in \(n=1\) states, 8 electrons in \(n=2\) states, 1 electron in an \(n=3\) state

We can now see a logical link between the filling of the energy levels and the structure of the periodic table. Column 0, for example, consists of atoms with the right number of electrons to fill all the available states up to a certain value of \(n\). Column I contains atoms like lithium that have just one electron more than that. This shows that the columns relate to the filling of energy levels, but why does that have anything to do with chemistry? Why, for example, are the elements in columns I and VII dangerously reactive? Consider, for example, the element sodium (Na), which is so reactive that it may burst into flames when exposed to air. The electron in the \(n=3\) state has an unusually high energy. If we let a sodium atom come in contact with an oxygen atom, energy can be released by transferring the \(n=3\) electron from the sodium to one of the vacant lower-energy \(n=2\) states in the oxygen. This energy is transformed into heat.
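The bookkeeping in these lists is easy to automate. The following sketch is my own illustration of the simplified model used in this section: it fills levels strictly in order of \(n\), with \(2n^2\) states per level once spin is counted, and it ignores the energy differences among states of the same \(n\) that exist in real many-electron atoms.

def configuration(num_electrons):
    config, n = [], 1
    while num_electrons > 0:
        capacity = 2 * n**2          # number of states with this n, with spin
        filled = min(capacity, num_electrons)
        config.append((n, filled))
        num_electrons -= filled
        n += 1
    return config

for z, name in ((3, "Li"), (10, "Ne"), (11, "Na")):
    print(name, configuration(z))

Sodium comes out as [(1, 2), (2, 8), (3, 1)]: a single high-energy \(n=3\) electron on top of filled \(n=1\) and \(n=2\) levels, which is exactly the electron it gives away.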
Any atom in column I is highly reactive for the same reason: it can release energy by giving away the electron that has an unusually high energy. Column VII is spectacularly reactive for the opposite reason: these atoms have a single vacancy in a low-energy state, so energy is released when these atoms steal an electron from another atom. It might seem as though these arguments would only explain reactions of atoms that are in different rows of the periodic table, because only in these reactions can a transferred electron move from a higher-\(n\) state to a lower-\(n\) state. This is incorrect. An \(n=2\) electron in fluorine (F), for example, would have a different energy than an \(n=2\) electron in lithium (Li), due to the different number of protons and electrons with which it is interacting. Roughly speaking, the \(n=2\) electron in fluorine is more tightly bound (lower in energy) because of the larger number of protons attracting it. The effect of the increased number of attracting protons is only partly counteracted by the increase in the number of repelling electrons, because the forces exerted on an electron by the other electrons are in many different directions and cancel out partially. Homework Problems a / Problem 6. b / Problem 15. c / Problem 16. d / Problem 25. e / Problem 43. 1. If a radioactive substance has a half-life of one year, does this mean that it will be completely decayed after two years? Explain. 2. What is the probability of rolling a pair of dice and getting “snake eyes,” i.e., both dice come up with ones? 3. Problem 3 has been deleted. 4. Problem 4 has been deleted. 5. Refer to the probability distribution for people's heights in figure f on page 834. (a) Show that the graph is properly normalized. (b) Estimate the fraction of the population having heights between 140 and 150 cm.(answer check available at 6. (a) A nuclear physicist is studying a nuclear reaction caused in an accelerator experiment, with a beam of ions from the accelerator striking a thin metal foil and causing nuclear reactions when a nucleus from one of the beam ions happens to hit one of the nuclei in the target. After the experiment has been running for a few hours, a few billion radioactive atoms have been produced, embedded in the target. She does not know what nuclei are being produced, but she suspects they are an isotope of some heavy element such as Pb, Bi, Fr or U. Following one such experiment, she takes the target foil out of the accelerator, sticks it in front of a detector, measures the activity every 5 min, and makes a graph (figure). The isotopes she thinks may have been produced are:

isotope | half-life (minutes)
\(^{211}\text{Pb}\) | 36.1
\(^{214}\text{Pb}\) | 26.8
\(^{214}\text{Bi}\) | 19.7
\(^{223}\text{Fr}\) | 21.8
\(^{239}\text{U}\) | 23.5

Which one is it? (b) Having decided that the original experimental conditions produced one specific isotope, she now tries using beams of ions traveling at several different speeds, which may cause different reactions. The following table gives the activity of the target 10, 20 and 30 minutes after the end of the experiment, for three different ion speeds.

activity (millions of decays/s) after… | 10 min | 20 min | 30 min
first ion speed | 1.933 | 0.832 | 0.382
second ion speed | 1.200 | 0.545 | 0.248
third ion speed | 7.211 | 1.296 | 0.248

Since such a large number of decays is being counted, assume that the data are only inaccurate due to rounding off when writing down the table. Which are consistent with the production of a single isotope, and which imply that more than one isotope was being created? 7.
Devise a method for testing experimentally the hypothesis that a gambler's chance of winning at craps is independent of her previous record of wins and losses. If you don't invoke the mathematical definition of statistical independence, then you haven't proposed a test. 8. A blindfolded person fires a gun at a circular target of radius \(b\), and is allowed to continue firing until a shot actually hits it. Any part of the target is equally likely to get hit. We measure the random distance \(r\) from the center of the circle to where the bullet went in. (a) Show that the probability distribution of \(r\) must be of the form \(D(r)=kr\), where \(k\) is some constant. (Of course we have \(D(r)=0\) for \(r>b\).) (b) Determine \(k\) by requiring \(D\) to be properly normalized.(answer check available at (c) Find the average value of \(r\).(answer check available at (d) Interpreting your result from part c, how does it compare with \(b/2\)? Does this make sense? Explain. 9. We are given some atoms of a certain radioactive isotope, with half-life \(t_{1/2}\). We pick one atom at random, and observe it for one half-life, starting at time zero. If it decays during that one-half-life period, we record the time \(t\) at which the decay occurred. If it doesn't, we reset our clock to zero and keep trying until we get an atom that cooperates. The final result is a time \(0\le t\le t_{1/2}\), with a distribution that looks like the usual exponential decay curve, but with its tail chopped off. (a) Find the distribution \(D(t)\), with the proper normalization.(answer check available at (b) Find the average value of \(t\).(answer check available at (c) Interpreting your result from part b, how does it compare with \(t_{1/2}/2\)? Does this make sense? Explain. 10. The speed, \(v\), of an atom in an ideal gas has a probability distribution of the form \(D(v) = bve^{-cv^2}\), where \(0\le v \lt \infty\), \(c\) relates to the temperature, and \(b\) is determined by normalization. (a) Sketch the distribution. (b) Find \(b\) in terms of \(c\).(answer check available at (c) Find the average speed in terms of \(c\), eliminating \(b\). (Don't try to do the indefinite integral, because it can't be done in closed form. The relevant definite integral can be found in tables or done with computer software.)(answer check available at 11. All helium on earth is from the decay of naturally occurring heavy radioactive elements such as uranium. Each alpha particle that is emitted ends up claiming two electrons, which makes it a helium atom. If the original \(^{238}\text{U}\) atom is in solid rock (as opposed to the earth's molten regions), the He atoms are unable to diffuse out of the rock. This problem involves dating a rock using the known decay properties of uranium 238. Suppose a geologist finds a sample of hardened lava, melts it in a furnace, and finds that it contains 1230 mg of uranium and 2.3 mg of helium. \(^{238}\text{U}\) decays by alpha emission, with a half-life of \(4.5\times10^9\) years. The subsequent chain of alpha and electron (beta) decays involves much shorter half-lives, and terminates in the stable nucleus \(^{206}\text{Pb}\). Almost all natural uranium is \(^{238}\text{U}\), and the chemical composition of this rock indicates that there were no decay chains involved other than that of \(^{238}\text{U}\). (a) How many alphas are emitted per decay chain? [Hint: Use conservation of mass.] (b) How many electrons are emitted per decay chain? [Hint: Use conservation of charge.] 
(c) How long has it been since the lava originally hardened?(answer check available at 12. When light is reflected from a mirror, perhaps only 80% of the energy comes back. One could try to explain this in two different ways: (1) 80% of the photons are reflected, or (2) all the photons are reflected, but each loses 20% of its energy. Based on your everyday knowledge about mirrors, how can you tell which interpretation is correct? [Based on a problem from PSSC Physics.] 13. Suppose we want to build an electronic light sensor using an apparatus like the one described in subsection 13.2.2 on p. 844. How would its ability to detect different parts of the spectrum depend on the type of metal used in the capacitor plates? 14. The photoelectric effect can occur not just for metal cathodes but for any substance, including living tissue. Ionization of DNA molecules can cause cancer or birth defects. If the energy required to ionize DNA is on the same order of magnitude as the energy required to produce the photoelectric effect in a metal, which of the following types of electromagnetic waves might pose such a hazard? Explain.

60 Hz waves from power lines
100 MHz FM radio
microwaves from a microwave oven
visible light
ultraviolet light

15. (a) Rank-order the photons according to their wavelengths, frequencies, and energies. If two are equal, say so. Explain all your answers. (b) Photon 3 was emitted by a xenon atom going from its second-lowest-energy state to its lowest-energy state. Which of photons 1, 2, and 4 are capable of exciting a xenon atom from its lowest-energy state to its second-lowest-energy state? Explain. 16. Which figure could be an electron speeding up as it moves to the right? Explain. 17. The beam of a 100-W overhead projector covers an area of \(1\ \text{m}\times1\ \text{m}\) when it hits the screen 3 m away. Estimate the number of photons that are in flight at any given time. (Since this is only an estimate, we can ignore the fact that the beam is not parallel.)(answer check available at 18. In the photoelectric effect, electrons are observed with virtually no time delay (\(\sim10\) ns), even when the light source is very weak. (A weak light source does however only produce a small number of ejected electrons.) The purpose of this problem is to show that the lack of a significant time delay contradicted the classical wave theory of light, so throughout this problem you should put yourself in the shoes of a classical physicist and pretend you don't know about photons at all. At that time, it was thought that the electron might have a radius on the order of \(10^{-15}\) m. (Recent experiments have shown that if the electron has any finite size at all, it is far smaller.) (a) Estimate the power that would be soaked up by a single electron in a beam of light with an intensity of 1 \(\text{mW}/\text{m}^2\).(answer check available at (b) The energy, \(E_s\), required for the electron to escape through the surface of the cathode is on the order of \(10^{-19}\) J. Find how long it would take the electron to absorb this amount of energy, and explain why your result constitutes strong evidence that there is something wrong with the classical theory.(answer check available at 19. In a television, suppose the electrons are accelerated from rest through a voltage difference of \(10^4\) V. What is their final wavelength?(answer check available at 20.
Use the Heisenberg uncertainty principle to estimate the minimum velocity of a proton or neutron in a \(^{208}\text{Pb}\) nucleus, which has a diameter of about 13 fm (1 fm=\(10^{-15}\) m). Assume that the speed is nonrelativistic, and then check at the end whether this assumption was warranted.(answer check available at 21. Find the energy of a particle in a one-dimensional box of length \(L\), expressing your result in terms of \(L\), the particle's mass \(m\), the number of peaks and valleys \(n\) in the wavefunction, and fundamental constants.(answer check available at 22. A free electron that contributes to the current in an ohmic material typically has a speed of \(10^5\) m/s (much greater than the drift velocity). (a) Estimate its de Broglie wavelength, in nm.(answer check available at (b) If a computer memory chip contains \(10^8\) electric circuits in a 1 \(\text{cm}^2\) area, estimate the linear size, in nm, of one such circuit.(answer check available at (c) Based on your answers from parts a and b, does an electrical engineer designing such a chip need to worry about wave effects such as diffraction? (d) Estimate the maximum number of electric circuits that can fit on a 1 \(\text{cm}^2\) computer chip before quantum-mechanical effects become important. 23. In classical mechanics, an interaction energy of the form \(U(x)=\frac{1}{2}kx^2\) gives a harmonic oscillator: the particle moves back and forth at a frequency \(\omega=\sqrt{k/m}\). This form for \(U(x)\) is often a good approximation for an individual atom in a solid, which can vibrate around its equilibrium position at \(x=0\). (For simplicity, we restrict our treatment to one dimension, and we treat the atom as a single particle rather than as a nucleus surrounded by electrons). The atom, however, should be treated quantum-mechanically, not classically. It will have a wavefunction. We expect this wavefunction to have one or more peaks in the classically allowed region, and we expect it to tail off in the classically forbidden regions to the right and left. Since the shape of \(U(x)\) is a parabola, not a series of flat steps as in figure m on page 875, the wavy part in the middle will not be a sine wave, and the tails will not be exponentials. (a) Show that there is a solution to the Schrödinger equation of the form \[\begin{equation*} \Psi(x)=e^{-bx^2} , \end{equation*}\] and relate \(b\) to \(k\), \(m\), and \(\hbar\). To do this, calculate the second derivative, plug the result into the Schrödinger equation, and then find what value of \(b\) would make the equation valid for all values of \(x\). This wavefunction turns out to be the ground state. Note that this wavefunction is not properly normalized --- don't worry about that.(answer check available at (b) Sketch a graph showing what this wavefunction looks like. (c) Let's interpret \(b\). If you changed \(b\), how would the wavefunction look different? Demonstrate by sketching two graphs, one for a smaller value of \(b\), and one for a larger value. (d) Making \(k\) greater means making the atom more tightly bound. Mathematically, what happens to the value of \(b\) in your result from part a if you make \(k\) greater? Does this make sense physically when you compare with part c? 24. (a) A distance scale is shown below the wavefunctions and probability densities illustrated in figure e on page 888. Compare this with the order-of-magnitude estimate derived in subsection 13.4.4 for the radius \(r\) at which the wavefunction begins tailing off.
Was the estimate on the right order of magnitude? (b) Although we normally say the moon orbits the earth, actually they both orbit around their common center of mass, which is below the earth's surface but not at its center. The same is true of the hydrogen atom. Does the center of mass lie inside the proton, or outside it? 25. The figure shows eight of the possible ways in which an electron in a hydrogen atom could drop from a higher energy state to a state of lower energy, releasing the difference in energy as a photon. Of these eight transitions, only D, E, and F produce photons with wavelengths in the visible spectrum. (a) Which of the visible transitions would be closest to the violet end of the spectrum, and which would be closest to the red end? Explain. (b) In what part of the electromagnetic spectrum would the photons from transitions A, B, and C lie? What about G and H? Explain. (c) Is there an upper limit to the wavelengths that could be emitted by a hydrogen atom going from one bound state to another bound state? Is there a lower limit? Explain. 26. Find an equation for the wavelength of the photon emitted when the electron in a hydrogen atom makes a transition from energy level \(n_1\) to level \(n_2\).(answer check available at 27. Estimate the angular momentum of a spinning basketball, in units of \(\hbar\). Explain how this result relates to the correspondence principle. 28. Assume that the kinetic energy of an electron in the \(n=1\) state of a hydrogen atom is on the same order of magnitude as the absolute value of its total energy, and estimate a typical speed at which it would be moving. (It cannot really have a single, definite speed, because its kinetic and interaction energy trade off at different distances from the proton, but this is just a rough estimate of a typical speed.) Based on this speed, were we justified in assuming that the electron could be described nonrelativistically? 29. Before the quantum theory, experimentalists noted that in many cases, they would find three lines in the spectrum of the same atom that satisfied the following mysterious rule: \(1/\lambda_1=1/\lambda_2+1/\lambda_3\). Explain why this would occur. Do not use reasoning that only works for hydrogen --- such combinations occur in the spectra of all elements. [Hint: Restate the equation in terms of the energies of photons.] 30. The wavefunction of the electron in the ground state of a hydrogen atom is \[\begin{equation*} \Psi = \pi^{-1/2} a^{-3/2} e^{-r/a} , \end{equation*}\] where \(r\) is the distance from the proton, and \(a=5.3\times10^{-11}\) m is a constant that sets the size of the wave. (a) Calculate symbolically, without plugging in numbers, the probability that at any moment, the electron is inside the proton. Assume the proton is a sphere with a radius of \(b=0.5\) fm. [Hint: Does it matter if you plug in \(r=0\) or \(r=b\) in the equation for the wavefunction?](answer check available at (b) Calculate the probability numerically.(answer check available at (c) Based on the equation for the wavefunction, is it valid to think of a hydrogen atom as having a finite size? Can \(a\) be interpreted as the size of the atom, beyond which there is nothing? Or is there any limit on how far the electron can be from the proton? 31. Use physical reasoning to explain how the equation for the energy levels of hydrogen should be generalized to the case of an atom with atomic number \(Z\) that has had all its electrons removed except for one. 32.
A muon is a subatomic particle that acts exactly like an electron except that its mass is 207 times greater. Muons can be created by cosmic rays, and it can happen that one of an atom's electrons is displaced by a muon, forming a muonic atom. If this happens to a hydrogen atom, the resulting system consists simply of a proton plus a muon. (a) Based on the results of section 13.4.4, how would the size of a muonic hydrogen atom in its ground state compare with the size of the normal atom? (b) If you were searching for muonic atoms in the sun or in the earth's atmosphere by spectroscopy, in what part of the electromagnetic spectrum would you expect to find the absorption lines? 33. A photon collides with an electron and rebounds from the collision at 180 degrees, i.e., going back along the path on which it came. The rebounding photon has a different energy, and therefore a different frequency and wavelength. Show that, based on conservation of energy and momentum, the difference between the photon's initial and final wavelengths must be \(2h/mc\), where \(m\) is the mass of the electron. The experimental verification of this type of “pool-ball” behavior by Arthur Compton in 1923 was taken as definitive proof of the particle nature of light. Note that we're not making any nonrelativistic approximations. To keep the algebra simple, you should use natural units --- in fact, it's a good idea to use even-more-natural-than-natural units, in which we have not just \(c=1\) but also \(h=1\), and \(m=1\) for the mass of the electron. You'll also probably want to use the relativistic relationship \(E^2-p^2=m^2\), which becomes \(E^2-p^2=1\) for the energy and momentum of the electron in these units. 34. Generalize the result of problem 33 to the case where the photon bounces off at an angle other than 180° with respect to its initial direction of motion. 35. On page 875 we derived an expression for the probability that a particle would tunnel through a rectangular barrier, i.e., a region in which the interaction energy \(U(x)\) has a graph that looks like a rectangle. Generalize this to a barrier of any shape. [Hints: First try generalizing to two rectangular barriers in a row, and then use a series of rectangular barriers to approximate the actual curve of an arbitrary function \(U(x)\). Note that the width and height of the barrier in the original equation occur in such a way that all that matters is the area under the \(U\)-versus-\(x\) curve. Show that this is still true for a series of rectangular barriers, and generalize using an integral.] If you had done this calculation in the 1930's you could have become a famous physicist. 36. Show that the wavefunction given in problem 30 is properly normalized. 37. Show that a wavefunction of the form \(\Psi = e^{by} \sin ax \) is a possible solution of the Schrödinger equation in two dimensions, with a constant potential. Can we tell whether it would apply to a classically allowed region, or a classically forbidden one? 38. Find the energy levels of a particle in a three-dimensional rectangular box with sides of length \(a\), \(b\), and \(c\).(answer check available at 39. Americium-241 is an artificial isotope used in smoke detectors. It undergoes alpha decay, with a half-life of 432 years. As discussed in example 18 on page 876, alpha decay can be understood as a tunneling process, and although the barrier is not rectangular in shape, the equation for the tunneling probability on page 876 can still be used as a rough guide to our thinking. 
39. Americium-241 is an artificial isotope used in smoke detectors. It undergoes alpha decay, with a half-life of 432 years. As discussed in example 18 on page 876, alpha decay can be understood as a tunneling process, and although the barrier is not rectangular in shape, the equation for the tunneling probability on page 876 can still be used as a rough guide to our thinking. For americium-241, the tunneling probability is about \(1\times10^{-29}\). Suppose that this nucleus were to decay by emitting a tritium (hydrogen-3) nucleus instead of an alpha particle (helium-4). Estimate the relevant tunneling probability, assuming that the total energy \(E\) remains the same. This higher probability is contrary to the empirical observation that this nucleus is not observed to decay by tritium emission with any significant probability, and in general tritium emission is almost unknown in nature; this is mainly because the tritium nucleus is far less stable than the helium-4 nucleus, and the difference in binding energy reduces the energy available for the decay.

40. As far as we know, the mass of the photon is zero. However, it's not possible to prove by experiments that anything is zero; all we can do is put an upper limit on the number. As of 2008, the best experimental upper limit on the mass of the photon is about \(1\times 10^{-52}\) kg. Suppose that the photon's mass really isn't zero, and that the value is at the top of the range that is consistent with the present experimental evidence. In this case, the \(c\) occurring in relativity would no longer be interpreted as the speed of light. As with material particles, the speed \(v\) of a photon would depend on its energy, and could never be as great as \(c\). Estimate the relative size \((c-v)/c\) of the discrepancy in speed, in the case of a photon with a frequency of 1 kHz, lying in the very low frequency radio range.

41. Hydrogen is the only element whose energy levels can be expressed exactly in an equation. Calculate the ratio \(\lambda_E/\lambda_F\) of the wavelengths of the transitions labeled E and F in problem 25 on p. 904. Express your answer as an exact fraction, not a decimal approximation. In an experiment in which atomic wavelengths are being measured, this ratio provides a natural, stringent check on the precision of the results.

42. Give a numerical comparison of the number of photons per second emitted by a hundred-watt FM radio transmitter and a hundred-watt lightbulb.
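A back-of-the-envelope numeric sketch for problem 40 (editorial, not part of the problem set): for a massive particle with \(E = hf \gg mc^2\), \(v/c = \sqrt{1 - (mc^2/E)^2} \approx 1 - \tfrac{1}{2}(mc^2/E)^2\), so \((c-v)/c \approx \tfrac{1}{2}(mc^2/hf)^2\).

    # Fractional speed deficit (c - v)/c for a hypothetically massive photon.
    # Sketch for problem 40, using the ultrarelativistic limit above.
    h = 6.626e-34    # J*s
    c = 2.998e8      # m/s
    m = 1e-52        # kg, the experimental upper limit quoted in the problem
    f = 1e3          # Hz, the 1 kHz radio photon of the problem

    E = h * f                 # photon energy
    x = (m * c**2) / E        # rest energy relative to total energy, ~1.4e-5
    print(0.5 * x**2)         # (c - v)/c ~ 9e-11

So even at this absurdly low frequency, the speed discrepancy would be below one part in ten billion.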
43. On pp. 890-891 of subsection 13.4.4, we used simple algebra to derive an approximate expression for the energies of states in hydrogen, without having to explicitly solve the Schrödinger equation. As input to the calculation, we used the proportionality \(U \propto r^{-1}\), which is a characteristic of the electrical interaction. The result for the energy of the \(n\)th standing wave pattern was \(E_n \propto n^{-2}\). There are other systems of physical interest in which we have \(U \propto r^k\) for values of \(k\) besides \(-1\). Problem 23 discusses the ground state of the harmonic oscillator, with \(k=2\) (and a positive constant of proportionality). In particle physics, systems called charmonium and bottomonium are made out of pairs of subatomic particles called quarks, which interact according to \(k=1\), i.e., a force that is independent of distance. (Here we have a positive constant of proportionality, and \(r>0\) by definition. The motion turns out not to be too relativistic, so the Schrödinger equation is a reasonable approximation.) The figure shows actual energy levels for these three systems, drawn with different energy scales so that they can all be shown side by side. The sequence of energies in hydrogen approaches a limit, which is the energy required to ionize the atom. In charmonium, only the first three levels are known.[8] Generalize the method used for \(k=-1\) to any value of \(k\), and find the exponent \(j\) in the resulting proportionality \(E_n \propto n^j\). Compare the theoretical calculation with the behavior of the actual energies shown in the figure. Comment on the limit \(k\rightarrow\infty\).

44. The electron, proton, and neutron were discovered, respectively, in 1897, 1919, and 1932. The neutron was late to the party, and some physicists felt that it was unnecessary to consider it as fundamental. Maybe it could be explained as simply a proton with an electron trapped inside it. The charges would cancel out, giving the composite particle the correct neutral charge, and the masses at least approximately made sense (a neutron is heavier than a proton). (a) Given that the diameter of a proton is on the order of \(10^{-15}\ \text{m}\), use the Heisenberg uncertainty principle to estimate the trapped electron's minimum momentum. (b) Find the electron's minimum kinetic energy. (c) Show via \(E=mc^2\) that the proposed explanation fails, because the contribution to the neutron's mass from the electron's kinetic energy would be many orders of magnitude too large.

45. Suppose that an electron, in one dimension, is confined to a certain region of space so that its wavefunction is given by \[\begin{equation*} \Psi = \begin{cases} 0 & \text{if } x<0 \\ A \sin(2\pi x/L) & \text{if } 0\le x\le L \\ 0 & \text{if } x>L \end{cases} \end{equation*}\] Determine the constant \(A\) from normalization.

46. In the following, \(x\) and \(y\) are variables, while \(u\) and \(v\) are constants. Compute (a) \(\partial(ux\ln (vy))/\partial x\), (b) \(\partial(ux\ln (vy))/\partial y\).

47. (a) A radio transmitter radiates power \(P\) in all directions, so that the energy spreads out spherically. Find the energy density at a distance \(r\). (b) Let the wavelength be \(\lambda\). As described in example 8 on p. 848, find the number of photons in a volume \(\lambda^3\) at this distance \(r\). (c) For a 1000 kHz AM radio transmitting station, assuming reasonable values of \(P\) and \(r\), verify, as claimed in the example, that the result from part b is very large.
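To see why the count in problem 47(c) comes out "very large" (an editorial sketch; the power and distance below are assumed values, since the problem only asks for "reasonable" ones): the energy density at distance \(r\) is \(u = P/(4\pi r^2 c)\), and a volume \(\lambda^3\) then holds \(N = u\lambda^3/hf\) photons.

    # Photon count in a volume lambda^3 at distance r from an AM transmitter.
    # Sketch for problem 47; P and r are assumed "reasonable" values.
    from math import pi

    h = 6.626e-34   # J*s
    c = 2.998e8     # m/s
    f = 1e6         # Hz (the 1000 kHz AM station of the problem)
    P = 5e4         # W, assumed transmitter power
    r = 1e3         # m, assumed distance

    lam = c / f                    # wavelength, 300 m
    u = P / (4 * pi * r**2 * c)    # energy density: flux over a sphere,
                                   # divided by c to convert to density
    N = u * lam**3 / (h * f)       # photons in one cubic wavelength
    print(N)                       # ~5e23: enormous, as the example claims

With roughly \(10^{23}\) photons per cubic wavelength, the classical wave description is an excellent approximation here.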
Exercise A: Quantum Versus Classical Randomness

1. Imagine the classical version of the particle in a one-dimensional box. Suppose you insert the particle in the box and give it a known, predetermined energy, but a random initial position and a random direction of motion. You then pick a random later moment in time to see where it is. Sketch the resulting probability distribution by shading on top of a line segment. Does the probability distribution depend on energy?

2. Do similar sketches for the first few energy levels of the quantum mechanical particle in a box, and compare with 1.

3. Do the same thing as in 1, but for a classical hydrogen atom in two dimensions, which acts just like a miniature solar system. Assume you're always starting out with the same fixed values of energy and angular momentum, but a position and direction of motion that are otherwise random. Do this for \(L=0\), and compare with a real \(L=0\) probability distribution for the hydrogen atom.

4. Repeat 3 for a nonzero value of \(L\), say \(L=\hbar\).

5. Summarize: Are the classical probability distributions accurate? What qualitative features are possessed by the classical diagrams but not by the quantum mechanical ones, or vice-versa?

[1] This is under the assumption that all the uranium atoms were created at the same time. In reality, we have only a general idea of the processes that might have created the heavy elements in the nebula from which our solar system condensed. Some portion of them may have come from nuclear reactions in supernova explosions in that particular nebula, but some may have come from previous supernova explosions throughout our galaxy, or from exotic events like collisions of white dwarf stars.

[2] What I'm presenting in this chapter is a simplified explanation of how the photon could have been discovered. The actual history is more complex. Max Planck (1858-1947) began the photon saga with a theoretical investigation of the spectrum of light emitted by a hot, glowing object. He introduced quantization of the energy of light waves, in multiples of \(hf\), purely as a mathematical trick that happened to produce the right results. Planck did not believe that his procedure could have any physical significance. In his 1905 paper Einstein took Planck's quantization as a description of reality, and applied it to various theoretical and experimental puzzles, including the photoelectric effect. Millikan then subjected Einstein's ideas to a series of rigorous experimental tests. Although his results matched Einstein's predictions perfectly, Millikan was skeptical about photons, and his papers conspicuously omit any reference to them. Only in his autobiography did Millikan rewrite history and claim that he had given experimental proof for photons.

[3] But note that along the way, we had to make two crucial assumptions: that the wave was sinusoidal, and that it was a plane wave. These assumptions will not prevent us from describing examples such as double-slit diffraction, in which the wave is approximately sinusoidal within some sufficiently small region such as one pixel of a camera's imaging chip. Nevertheless, these issues turn out to be symptoms of deeper problems, beyond the scope of this book, involving the way in which relativity and quantum mechanics should be combined. As a taste of the ideas involved, consider what happens when a photon is reflected from a conducting surface, as in example 23 on p. 703, so that the electric field at the surface is zero, but the magnetic field isn't. The superposition is a standing wave, not a plane wave, so \(|\mathbf{E}|=c|\mathbf{B}|\) need not hold, and doesn't. A detector's probability of detecting a photon near the surface could be zero if the detector sensed electric fields, but nonzero if it sensed magnetism. It doesn't make sense to say that either of these is the probability that the photon "was really there."

[4] This interpretation of quantum mechanics is called the Copenhagen interpretation, because it was originally developed by a school of physicists centered in Copenhagen and led by Niels Bohr.

[5] This interpretation, known as the many-worlds interpretation, was developed by Hugh Everett in 1957.

[6] See page 895 for a note about the two different systems of notations that are used for quantum numbers.

[7] After f, the series continues in alphabetical order. In nuclei that are spinning rapidly enough that they are almost breaking apart, individual protons and neutrons can be stirred up to \(\ell\) values as high as 7, which is j.
[8] See Barnes et al., "The XYZs of Charmonium at BES." To avoid complication, the levels shown are only those in the group known for historical reasons as the \(\Psi\) and \(J/\Psi\).
A Review of Irving Stein's The Concept of Object as the Foundation of Physics, by Doug Renselle

Related Links:
- Irving Stein's Works Page (Stein on Philosophy, Physics, and Economics.)
- Stein's Special Relativity - A Critique (Stein uncloaks some absent, unstated or problematic assumptions by Einstein, et al.)
- Dr. Stein email to Doug 10May2000 (Stein comments on our review of his book.)
- See note on: The Typical Path of a Quantum Object. (Note added 12Nov1998 PDR)
- See correction: Buridan Not a Sophist (Correction added 10Feb1999 PDR)

My perspective of Irving Stein's The Concept of Object as the Foundation of Physics, Volume 6 of the San Francisco State University Series in Philosophy, 1996, Peter Lang Publishing, hardbound, 100 pages. - Doug Renselle

Review outline:
- Classical Object précis
- Special Relativistic Object précis
- Classical Random Walk Object précis
- Quantum Schrödinger Object précis
- Quantum Dirac Object précis
- Nonspace and Measurement
- Summary and Exegesis
- Ontology I
- Ontology II
- Issues Which Arose During the Review
- Reviewer Comments on Schrödinger Object
- End of Review

"...the ontology of physics is not classical." Irving Stein, Page 79, The Concept of Object as the Foundation of Physics.

Reader, please be aware that Stein's work is nontrivial. This is a long review; you do not have to read the whole thing at once. The outline above makes it easier for you to access segments of the review, or you may choose to read it all from the top.

Reading this review is different than reading our Lila review or our review of Deutsch's Fabric of Reality. Stein's book is tough going unless you have some foundation in quantum science. But compared to reading books about quantum mechanics, this one is unique. It is short and sweet. It abounds in resonance with Pirsig's three works: Zen and the Art of Motorcycle Maintenance (ZMM), Lila, and Subjects, Objects, Data and Values (SODV). It tries to retain a Subject-Object Metaphysics (SOM) object-based ontology for physics, but Stein admittedly fails—we sense—almost with glee.

This review covers a lot of ground. Despite reviewer efforts there may be flaws or misinterpretations. We will repair those as they attain actuality in the reviewer's space. Also, during the review, we made sketches of what we saw Stein describing. That artwork awaits quality rendition. Simply, this review must for the foreseeable future be a living document, with imminent corrections and artwork additions as they attain actuality here.

When Doug uses a phrase "quantum science," his use of 'science' is n¤t classical. See Doug's Quantum English Language Remediation of 'science.' Doug's omnique usage of quantum~juncture in this review is to narrate what Doug intends by an almost pneumatic evolution of self from CTMs to QTMs, from hylic-psychic to nearer pneumatic. See topos. Doug - 3Jul2010.

Watch for announced changes to this review in the future. Let's begin with an abstract of the review...
Abstract: Starting from a carefully laid classical object foundation, Irving Stein uses an object mutation process to incrementally evolve a sequence of ontologies, using familiar classical and quantum science terminology, thus:

- Classical ontology: object is an analytic time function in a space- and time-independent plenum; the object may travel at unlimited speeds and accelerations; mass is an incoherent object property.
- Special relativity object: object is a fixed step-length preferential random walk in a speed-limited space-time identity; defined speed distinguishes it from the classical ontology.
- Random walk object: object is prequantum; introduces length as a proxy for mass.
- Quantum ontology (Schrödinger object): object is a variable step-length nonpreferential random walk.
- Quantum ontology (Dirac object): object is a time-reversal-step nonpreferential walk; time is a proxy for mass.
- Space-nonspace ontology: object dissolves into a measurement-quantized space-nonspace entity; the space aspect appears classical and obeys classical rules; the nonspace aspect is nonapparently unreal and obeys quantum rules; space-nonspace quantum measurement creates space from nonspace.

Caveat: the above table contents are exceptionally oversimplified.

Stein tells us that classical physics is both exegetic (explainable) and exoteric (public). What he means is that most members of Western culture, elite and non-elite, have more than a vague understanding of our classical Newtonian ontology. Why? Because there is an ontology for classical physics—the Newtonian ontology—it exists. It derives from our SOM culture born more than 25 centuries past. Juxtapose that to what Richard P. Feynman says about quantum science, "No one understands it." Why? Stein concludes we have no ontology for quantum science. Only a few scientific elite even begin to fathom depths of quantum science. Without an ontology, quantum science is neither exegetic nor exoteric. Without an ontology, no one can understand quantum science. Stein's goal is to remedy that problem. His goal is to derive a new object ontology for quantum science. Stein achieves his goal of a new ontology for quantum physics.

Stein's purpose: "...it is purpose of this work only to give a reasonable, coherent definition of concept of object, it will be seen that the theories of relativity and quantum physics arise out of the unfolding of the concept of object presented here. In end, the concept of object itself is found not to be absolutely basic and dissolves into concept of what I call nonspace, which is found to be the fundamental ontology." Page xvi. (Our red italics.)

Stein's claimed results: "What is claimed in this work is that an ontology has been laid out for physics, at least for a one-dimensional, non-interacting physics. By "ontology" is meant the origin in reality of the results of measurements. This is done not by starting out with a set of hypotheses or axioms, but by attempting to define most basic concepts that we have in discovering the world around us, namely those of object and measurement." Page 14.
Stein arrives at a new ontology which we can summarize thus, side-by-side with Pirsig's Metaphysics of Quality (MoQ). (Pirsig's own terms are marked ✓; unmarked MoQ entries are reviewer's extensions to Pirsig's MoQ. 'v' means "in interrelationships with." '≈' means quantum-included-middle, non-Aristotelian-excluded-middle, Planck rate change 'equals.')

Figure 1 - Comparison of Stein's Ontology to Pirsig's MoQ

Stein's Ontology | Pirsig's MoQ
- Reality = the concept of space | ✓ Reality ≈ Quality
- Reality = nonspace v space | ✓ Reality ≈ Dynamic Quality v Static Quality
- Nonspace is conceptual | ✓ Dynamic Quality (DQ) is nonconceptual
- Space is conceptual | ✓ Static Quality (SQ) is conceptual
- Assumes nonspace preexists | ✓ Assumes Dynamic Quality preexists
- Space arises from nonspace | ✓ Static Quality arises from Dynamic Quality
- Measurement creates space | ✓ Quality Events (QEs) create Static Quality
- Measurement entails change of state | QEs entail change of state
- Measured systems may remain isolated | QEs entangle Static Patterns of Value (SPoVs)
- Measured systems may interact | SPoVs interrelate with both SQ and DQ
- Interactions transfer energy | Energy transfer is an interrelationship
- Interacting systems are not isolated | ✓ Contextual SPoVs interrelate via DQ
- Object interaction in space entails energy transfer | QEs may entail energy transfer
- All objects interact (at least gravitationally) | ✓ All SPoVs interrelate with both SQ and DQ
- Reality is quantum-objective | ✓ Reality is unified, i.e., subject-QUALITY-object
- All things are objects | ✓ All things are SPoVs
- Phenomena arise from object interactions | ✓ Known Phenomena are SPoVs
- Objects define space | ✓ SPoVs define SQ
- Nonspace is unlimited possibilities | ✓ DQ is undefinable, unknown
- Space is actuality | ✓ SQ is the known
- Measurement excludes awareness | QEs may be aware (In Quantonics, they are!)
- Does not explain discreation | Does not explain discreation

Note that we depict Pirsig's MoQ in Figure 1 using some classical SOM terminology, to keep a modicum of compatibility with Stein's objective bent. For example we show subject-Quality-object in SOM fashion. In pure MoQese Quality is both DQ and SQ, with DQ surrounding SQ and SQ representing SPoVs which unify SOM's concepts of subject and object. We assume some of the people reading this review may not be well-steeped in MoQese.

Return to the Review Outline

Next is an overview of what our review will cover:

Overview: Irving Stein, in his 1996 book, The Concept of Object as the Foundation of Physics, offers a distinctive philosophical contrast to a hypothetical work which might have been titled, The Concept of Flux as the Foundation of Physics. The old, classical subject-object metaphysical (SOM) battle between Christiaan Huygens and Isaac Newton clearly still rages on, despite the fact that modern quantum science unified their hard-fought diametrical positions early in the 20th century. Huygens said photons were waves. Newton said they were particles. Those with an objective bent stayed in Newton's camp. Those with a subjective bent followed Huygens. Those with a holistic bent followed the physical-mystic founders of quantum theory. Stein, just like Einstein, Bohm, Albert, Deutsch, et al., is still mired in objectivist encampment. Yet, Stein approaches the holistic realm in spite of himself. He arrives at a quantum ontology so close to Pirsig's Metaphysics of Quality (MoQ) that we are just awed by his brilliance, despite its objective antecedents.
When you read this book you will be amazed at Stein's incredible job of re-deriving the current quantum wave ontology "objectively" using random walks with fluxing step lengths. That he can do this at all, in the reviewer's opinion, affirms Niels Bohr's insistence on the complementarity of particle and wave. As Nick Herbert, et al., told us, particle and wave are complementary, co-defining conjugates. Stein chooses the objective context where a majority (Doug had a weaker understanding of quantum physics in 1998. He should have used 'minority' here. One must also be aware how Stein's random walks are really wave proxies. Random walks perceived dynamically are quantum~flux proxies, which is what makes Stein's approach here so delectable. Doug - 28Aug2007.) of other physicists choose its wave complement perspective. You will enjoy reading Stein's fresh, grounded, overall approach and the uncanny resemblance of his resulting quantum object ontology to Pirsig's MoQ.

He shows us that you may, if you wish, keep one foot in the classical legacy while garnering a modicum of quantum object ontology. The price you may pay for retaining your classical predilections is small. You may relinquish any chance of understanding non-objective phenomena. We know that is a serious problem with the legacy SOM we inherit from Aristotle. If your ontology is SOM, it is difficult—almost impossible—to conceptualize non-objective phenomena. Stein thinks his new ontology will help you do both.

Let's start with Stein's approach. Sequentially, he develops and evolves five distinct object models, each progressively more comprehensive than the former. Each model depends upon the former model's results, but each model stands pretty much alone in its ontological precepts. We already know this, but Stein makes it ever so clear that our ontological or our metaphysical model depends more than anything on the most primal assumptions we use for developing it. Our assumptions determine our metaphysics. His five models are:

1. Classical Object
2. Relativistic Object
3. Classical Random Walk Object
4. Quantum Object (Schrödinger)
5. Quantum Object (Dirac)

After he finishes developing his most general Dirac Quantum Object, Stein provides us with a powerful and useful treatise on one of the remaining major problems in quantum science: quantum measurement. In the process of developing the five progressive object models, Stein leaves the reader with a plethora of interpretations of being and becoming. Kindly, he does some housekeeping and defines terms in a dedicated chapter. Finally, he provides us with two ontologies for evaluation:

1. Ontology I (What Stein discovered in this work.)
2. Ontology II (Stein's tentative answer to, "What is Reality?")

Stein insists that his new model of a quantum ontology is objective, yet midway in his incremental development of the model, he introduces a wave mime without acknowledging it as such. Of course most quantum scientists acknowledge the dual nature of quantum reality as particle-wave, but Stein uses objective blinders to develop a space-nonspace quasi-equivalent metaphor. We can only conclude that Stein, steeped in SOM yet impressed with incredible success of quantum mechanics, is making a penultimate effort to retain the objective bases of science. Acknowledging Feynman's assessment that, "No one understands quantum mechanics," Stein's book will help you to commence the process of learning much of what the scientific elite know about this enormous topic.
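Before going further, a toy illustration of the random-walk machinery Stein leans on may help. This sketch is an editorial illustration, not Stein's derivation or the reviewer's artwork; it assumes unit steps and a fair 50/50 direction choice. Averaged over many walkers, the mean displacement vanishes while the spread grows like the square root of the step count, which is both the diffusive, wave-like behavior that lets a random walk serve as a flux proxy and the zero-average-velocity problem discussed below under the Classical Random Walk Object précis.

    # Minimal symmetric fixed-step random walk: a toy stand-in for Stein's
    # pre-quantum "random walk object". Assumptions: unit step length,
    # equal (50/50, preference-free) direction choice at every step.
    import random

    def walk(n_steps):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))   # binomial either/or step choice
        return x

    walkers = [walk(1000) for _ in range(5000)]
    mean = sum(walkers) / len(walkers)
    rms = (sum(x * x for x in walkers) / len(walkers)) ** 0.5

    print(mean)   # ~0: the ensemble's average velocity vanishes
    print(rms)    # ~sqrt(1000) ~ 32: diffusive spread grows with step count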
The book is short, tight, dense, yet clear and perspicuous. It bursts with insights and breakthroughs unique to Stein, some of which may become famous. Skip the math, if that is a challenge for you, and just accept Stein's results, which he rightfully and proudly claims. Stein describes his mathematical results very well in plain language. Try to understand the big picture he attempts to describe. We wish to help you with that in this review using both words and some simple but limited artwork. We also point out where Stein's ontology differs from our perspective of Pirsig's more holistic MoQ. One of the best ways we can help you is with a list of term definitions. You will need to cognize these well if you are to grasp the essence of his new ontology.

Our remarks in this review of Stein's book are mixed in favor and flavor. We think it is a great work, but the book centers on a Newtonian particulate archetype of reality, which we find partial, misleading, and biased from 2500 years of Western SOM culture. Stein's insistence on a proemial objective discovery of reality is a cleaved grounding for his new quantum ontology. To Stein's credit, he acknowledges that the results of his work point unambiguously at a non-objective reality he calls "nonspace," and both reviewer and reader alike may have to acknowledge his work offers a kind of missing half. We have wave function derived quantum mechanics. Stein adds his own, fresh, new particle/object derived quantum mechanics.

We point out areas of Stein's work where we think you may need to be wary under our section titled, Issues. Be careful in drawing your own ontological/metaphysical views from Stein's work alone. We lean toward MoQ. Stein's intellectual stream runs deep in the SOM chasm. But admittedly, we too are biased. Keep that in mind as you read the review. We hope you read our remarks as constructive and not derogatory toward Stein's powerful efforts. Stein's book taught us much which was just fog before. May his work do same for you.

Return to the Review Outline

Next are some definitions for the review:

Definitions: Before we begin the review, we need to define a few crucial terms just in case you are unfamiliar with them. If you do not understand Stein's jargon, he will leave you bewildered. His breakthroughs are enough to contemplate on their own, without the nuisance of absent or fuzzy semantics.

Definitions - Irving Stein's classical use of terms:

ontology: That branch of metaphysics which studies the nature of being and becoming. In other words, using Stein's object ontology, how does an object arise/become, and once in existence, what is the nature of its being in existence. (He does not speak of it, but you should consider the inverse or perhaps the complement of becoming.) Stein: "The origin in reality of results of measurements." Page 14.

space: Actuality.

measurement: Objects arise from nonspace by measurements on nonspace. Note: Stein offers at least 72 distinct statements on measurement in this text!

nonspace: Non-actuality. Everything that is not space. Here is a reviewer list of currently known metaphors of Stein's nonspace:
1. conceptually unknown
2. the unknown
3. DQ (Pirsig)
4. possibilities
5. pure state
6. superposition (a confusing term)
7. unactualized reality
8. undifferentiated aesthetic continuum (Northrop)
9. undifferentiated reality
10. unmeasured phenomenal objects (Pirsig on Bohr in SODV)
11. unspecifiable manifestations (Polanyi, esp. see his The Study of Man)
12. free energy
13. vacuum space
14. VES (vacuum energy space)
15. QVE (quantum vacuum energy)
16. white hole (coinage attributed to Jack L Chalker's (1944-2005) Well World series of books)
17. ZPE (zero point energy)

coherent: Logically/epistemologically understandable in an Aristotelian sense.

consistent: Always states the truth in an unlimited context.

complete: States all truths in an unlimited context. Synonym - absolute.

epistemology: That branch of philosophy which considers the source and scope of human knowledge.

dimension: Stein assumes a one-dimensional object modeling environment for simplicity.

Return to the Review Outline

Next is our review:

Allow reviewer to walk you through, in summary form, Stein's evolution of a new quantum science ontology starting from his assumption of an existing classical object ontology. As you read this summary and the subsequent review detail, be mindful that Stein uses his only known, single tactic to keep his work simple enough (again, exegetic and exoteric) to accomplish his goal of a new ontology—he must limit his assumptions thus: one-dimensional, non-interacting objects. We concur with his approach and see his work as clearly extensible. After our review we present various issues for you, our reader, to consider. Our review provides a précis on each evolutionary step of Stein's object evolution, plus individual reviews of his chapters on Nonspace and Measurement, Summary and Exegesis, Ontology I, and Ontology II:

Return to the Review Outline

Classical Object précis: Stein gives us an unambiguous depiction of the classical object. The current classical object ontology is: an analytic function of time is the ontological basis for classical mechanics. The current classical object ontology is incoherent! It is incoherent because the classical object is incoherent. Newton gave us an object which is essentially an impossible entity. We require the Newtonian classical object, NCO, to exist in space and time (a plenum), and have the property of mass. We also require space and time to be independent concepts; thus in Newtonian reality, classical concepts of mass, length, and time are autonomous ideas. Stein shows us this classical ontology is impossible. It spawns familiar lemming SOM detritus like analyticity, independent space and time outside of NCOs, determinism, infinite rates (velocity, acceleration, etc.), continuous reality, past-present-future real time line, reversibility, everywhere existence of an object in past-present-future, inability to conceptualize change except by continuous and infinite derivatives, the paradox of simultaneous NCO independence and interaction (gravitation, et al.), etc. Stein tells us that the restrictions which the NCO ontology places on physical reality self-conflict. Reader, we hope this provokes you well. If you want to see the urgent need for a new ontology, study Stein's chapter on the classical object.

Return to the Review Outline

Special Relativistic Object précis: The current special relativity ontology is: a space-time identity is the ontological basis of special relativity. Einstein brought us his special relativity, SR, and its adherence to Lorentz invariance. But SR is classical, may we say purely classical, and still incoherent, per the classical object paragraph above. However, the SR object, SRO, gives Stein an incremental segment of his evolution from a classical ontology toward a new ontology. Stein also gives us a freebie in process: for the first time, he explains why there is an upper limit to the speed of all actual objects, i.e., objects in space-time.
As we said in the previous paragraph, the derivatives of NCO functions of time are unlimited. Thus NCO objects may travel at infinite velocities and accelerations. Stein tells us bluntly that if an NCO's velocity is undefined, the NCO does not exist. This becomes apparent the more you think about it. Stein tells us that an SRO like an NCO must be analytic, but that its speed must be limited. He derives this using a simple random walk object, of fixed step length(s), and binomial choice (direction preference) at the outset of each step. Definable speed of an SRO distinguishes it from an NCO. In his prescient work here, Stein discovers that the constancy of velocity across different reference frames in special relativity is not a requirement, but a consequence of the identity of space and time.

For us, in Pirsig's MoQ and Quantonics, the space-time identity is a crucial axiom of our philosophical foundation. (We mean 'identity' in a non-classical sense. As we have shown elsewhere classical identity is an impossibility, just as Stein so eloquently shows us classical objects are impossible. Our Quantonics' quantum identity is that all of physics' measurables are (indeed, all reality is) functions of quantum flux. I.e., mass ≈ length ≈ time ≈ gravity ≈ f(flux). Note that classical 'identity' is Newtonian "homogeneous," and "quantitative/analytical." Implication? Classically, there is homologically/unilogically, conceptually/perceptually one 'time,' one 'mass,' one 'length,' one 'gravity,' all in one global context (OGC) reigned by one global truth (OGT) system. By comparison, quantum identity is Bergsonian "heterogeneous," and "qualitative/stochastic." Implication? Quantumly/Quantonically, there are heterologically/paralogically, quantons of many 'times,' many 'masses,' many 'lengths,' many 'gravities,' all in many quantum islands of truth reigned quantum-locally/nonlocally-separably/nonseparably-subluminally/superluminally by many sophist, Planck rate recursive, interrelating/compenetrating/commingling contextual systems.)

Return to the Review Outline

Classical Random Walk Object précis: The classical random walk object, RWO, introduces concept of non-zero step length to the old classical ontology, and bridges from that ontology to the new quantum ontology. Doing this allows us to gain defined or maximum speed for our developing ontology, and eliminate that particular incoherency. For this overall gain in ontological coherency we trade (lose) both classical analyticity and our former classical ontology. We get a new RWO and define a proxy for mass in terms of step length. It is here, in the development of Stein's RWO, that we begin the conceptual adventure of opening our classical blinders to a new realm, a new paradigm: the pre-quantum realm. But the RWO introduces a new problem. Since the random walk steps are arbitrarily plus or minus, and the step lengths are (for now) constant, the average velocity of any RWO is always zero. This is unreasonable, and Stein fixes the problem in his derivation of the quantum object.

Return to the Review Outline

Quantum Schrödinger Object précis: Stein gives us a new depiction of his evolved quantum object. Now, here perspicacious reader, we ask you gently to put on your quantum hat. If you have done this naught before, it may require a tad of faith. This is the quantum juncture! This is the point, once past and its meaning fully grasped, from which there is no return.
If you have not been here, from this mark onward your life and perceptions will be forever changed. For reviewer, my own first experience of this quantum juncture was one of epiphany and awe. May yours be also.

Stein asks you to first understand two terms: preference and nonpreference. In the classical random walk object, at each step of the walk, a decision (a SOM either/or choice) has to be made. In our one-dimensional modeling environment, the object may walk in either the positive/right direction or the negative/left direction. That is classical, SOM, either/or thinking. He asks you to enter the new paradigm of quantum thinking, and permit your mind to know the quantum object moves in both positive and negative directions simultaneously. The quantum object is in both locations simultaneously. Quantum thinking is both/and thinking. Classical objects require direction preference. Quantum objects are directionally/locationally nonpreferential. The quantum object does its nonpreferential random Chautauqua in nonspace (more on this term below).

If you ask the question, "How can a classical object take a nonpreferential step?" we find paradox in SOM. But when, by a first act of faith, we move to the quantum realm, we eliminate the paradox! Stein describes this act of faith thus, "It is here...in the resolution of this paradox, that we fortuitously turn our backs on classical physics [SOM] and take the leap into quantum mechanics, from an object defined by either an analytic or random walk function to an entirely different kind of object." Page 58, section 55. Wow! Epiphany! Awe! He tells us we must now let go of our cherished classical object as a spatial function of time. And here we see his subtle reference to Buridan, "[whoever] wrote about Buridan's Ass starving midway between two identical bales of hay had insight some of the rest of us did not yet have." Page 58, section 55.

Here is a very important tie to Pirsig's work. Buridan was a 14th century philosopher who amazingly adhered to sophism after nearly 2000 years of virulent philosophical abuse. Buridan was the only practicing sophist philosopher the reviewer knows about, subsequent to SOM onslaught and its extreme denigration of sophism and sophists starting about 2500 years ago. [Correction: Since this paragraph was written, the reviewer subsequently reviewed G. E. Hughes' John Buridan on Self-Reference, Chapter Eight, Sophismata, of Buridan's Summulae de Dialectica. In this subsequent review, I discovered Buridan was not a sophist! He was an enthusiastic student of sophism. But his philosophy was Aristotelian. Buridan was a dyed-in-the-wool SOMite of the first magnitude. Buridan's Sophismata was not about the goodness of sophism, but about its evils from a SOM perspective. He proceeded to use SOM formal logic to 'prove' that all sophisms are "false." You, reader, will be interested to know that Buridan would have called quantum science "sophistry," with denigration intended.]

The interesting part of the Pirsig connection here is how he talks about the Birth of SOM (our phrase, re: chapter 29) in ZMM, and further in that same work how he queries, "What would the outcome have been?" had sophism won over SOM. Stein is telling us, indirectly, that sophism is kin to modern quantum science! Bravo! We agree! Pirsig (as he told us) was right all along! The sophists were closer to quantum excellence than the naïve SOMites could ever perceive.
So, from the reviewer's perspective, sophism was placed on hiatus only to be resurrected and extended in modern quantum science. Next month, in November 1998, we review some of Buridan's work, connections to it, and others' assessments of it.

Stein shows us that quantum nonpreference from a SOM perspective is a sophism, a paradox. SOM was partly right. In quantum reality quantum nonpreference is still a sophism; however, there is nothing paradoxical about it. Pirsig makes this clear in his metaphysical descriptions of reality in his three works.

Stein has more work to do though. He must introduce a new concept: nonspace. Now we have two quantum subrealms: space and nonspace. Space is where our perceived, actual world exists, but now Stein tells us that quantum objects can be in space and/or nonspace. He shows us how actualized quantum objects in space arise from nonactual quantum objects in nonspace. What causes them to arise? A quantum-interpreted classical concept called measurement. An actual quantum object may arise from a nonactual quantum object, momentarily, when something measures the nonactual quantum object. The quantum object ontology says that becoming is when an actual quantum object arises from a nonactual quantum object. Becoming is the ontological transformation of nonspatial quantum objects into spatial quantum objects. Being is the ontological experience of actual quantum objects in space, affected by conditions both in space and nonspace.

Space appears as a classically objective SOM facade to us (It appears as a facade because SOM denies the existence of, and/or cannot classify the concept of nonspace.). It appears as Reality, but is just one of infinite pseudo realities. SOM is literally a false ontology, because it incoherently explains and publicizes the nature of being (ontology). By-the-way reader, the implication is the same for all of the SOM ISMs, too. But quantum science and Stein's quantum ontology tell us this new ontology is not the quantum model of Reality until we include nonspace and the ontological quantum transformations twixt space and nonspace. Be keenly aware that the reviewer is vastly oversimplifying this. But Stein is too, less so, because his purpose is to develop an exegetic and exoteric ontology for all of us. Bravely and nobly, Stein tells us we must have a new ontology (e.g., Stein's, Pirsig's, et al.) for the new millennium if we are to survive the imminent huge and rapid changes borne on the quantum Chautauqua paradigm which took its first nonpreferential step over one hundred years ago at the end of the 19th century. Stein is emphatic, "...the 'nonpreference' walk described here is the ontological basis of quantum mechanics."

Additional reviewer comments on the Quantum Schrödinger Object

Return to the Review Outline

Quantum Dirac Object précis: Let us keep this incremental evolution of the quantum object simple. Stein extends the Schrödinger quantum object to make it relativistic; that is it. The ontology pretty much remains the same as discussed under the Schrödinger quantum object paragraph. To achieve the Dirac relativistic quantum object, Stein re-interprets the random walk as a sequence of time-reversal steps instead of as a sequence of nonpreferential length steps. Having done this, removing length from the random walk, Stein loses length as the proxy for mass. However, he goes on to show (on page 71, section 70) that time is a proxy for mass.
So Stein evolves a nonpreferential time-reversal random walk as the Diracian relativistic ontological basis of quantum mechanics.

Return to Review Outline

That ends the précis list of Stein's quantum object evolution. Now we review the last four chapters of the book, one at a time:

Nonspace and Measurement: In the reviewer's opinion, Stein's stubborn refusal to use the term 'complementary' makes his chapter on Nonspace and Measurement difficult to read and understand. He talks about points in space and nonspace as though they are complementary, but does not say so. The reader is left to somehow see points (loci) in space and nonspace as conjugal or some other adjective for a relationship. In several instances Stein describes points in space and nonspace as though they are indeed conjugal or complementary, but he does not say it thus. His reason, we believe, is that those terms (might) take us back into a non-object-based theory of quantum reality. In the reviewer's opinion, if the points in space and nonspace are conjugal, just say so. Make it an axiom.

Stein insists on using the phrase, 'classical object,' for actualized quantum objects. Remember, we said that actualized quantum objects transform from nonspace to space. We do not like the continued use of this phrase, mainly because Stein makes strong negative remarks about the metaphor of, 'classical object,' being outright wrong. As a result the remaining chapters in the book are, in the reviewer's opinion, confusing because Stein intermixes the terms object, quantum object, and classical object at his apparent whim. To alleviate this problem for you the reader, please bear with this reviewer and allow me to use two simple notations: AQO (Actualized Quantum Object, AKA classical object, which Stein tells us confusingly is also a quantum object), and NQO (Nonactualized Quantum Object, AKA quantum object). So AQOs and NQOs are both quantum objects (QOs).

Next, instead of reviewing the chapter in prose, for each of the following list of terms allow me to list some Steinian 'axioms' to aid your understanding of nonspace and measurement:

QO axioms (QO = Quantum Object)
1. both AQOs and NQOs are QOs

AQO axioms (AQO = Actualized Quantum Object = classical object)
1. an AQO is a point, one locus in space; an AQO is at one point in space
2. AQOs have the property of space
3. AQOs do not have the property of nonspace
4. AQOs move totally nonpreferentially in space
5. over passing time an AQO will find itself, at any given moment, at all possible loci in space (It is now 9Jan2007, and we realize, almost epiphanously, Stein has unwittingly described a hologram! Doug needs to take these axioms and upgrade them with Quantonics' innovations in quantum~memetics, qualogos, quantonics flux~script~EIMA~metaphor, semiotics, memeotics, heuristics, and quantum~hermeneutics. Doug.)
6. an AQO's mass restricts its possible loci in space
7. an AQO's mass bounds the average reversal time of each nonpreferential step (necessary to define an AQO)
8. small-mass AQOs have longer reversal times; large-mass AQOs have shorter reversal times (de Broglie relation; m ∝ 1/λ; i.e., mass is inversely proportional to nonpreferential step length, or—with relativistic space-time identity—mass is inversely proportional to time reversal steps)
9. Def.: an AQO is an NQO of 'essentially infinite' mass, therefore
10. an AQO is just a special kind of NQO (in Pirsig's MoQ we say an AQO is a latched portion of an NQO)
11. AQOs stay at one location or move using a trajectory in space

NQO axioms (NQO = Nonactualized Quantum Object)
1. an NQO is points, loci in nonspace; an NQO is at one or more points in nonspace
2. NQOs have the property of nonspace
3. NQOs do not have the property of space
4. if an NQO's loci were in space, the NQO would be at one of those loci, and if AQOs were at each of those loci, the actualized NQO would be at one of those loci (Stein subtly admits this is unclear. Sections 72 and 73 are extraordinarily difficult for the reviewer. If any reader can clarify, email us and we will revise, with attribution. Stein's apparently inconsistent use of the terms 'object' and 'space' in these sections aggravates our confusion.)
5. unmeasured NQOs, over time, occupy a continually increasing set of continuous loci in nonspace (vis-à-vis AQO axiom 11)
6. measured NQOs, between measurements, do NQO axiom 5

Nonspace axioms
1. points in nonspace are not points in space
2. nonspace is what its nonspatial loci would be if they were to become spatial loci (In MoQ and Quantonics, we would say this more simply (to us): nonspace and space are complementary.)
3. there is no space in nonspace
4. nonspace is a random walk of non-preferential time reversals in imaginary time, plus a time proxy for mass/energy (See Note tpme)
5. quantum law rules in nonspace between measurements (see dual statement under space)
6. nonspace is the reservoir of all possibilities (see dual statement under space)

Note tpme: (added 23Apr99 PDR): Your reviewer finds on other web sites discussions of Vacuum Energy (non)Space. VES' imputed energy density is always some enormous number: ~10^93 grams per cubic centimeter. That energy density says one cubic centimeter of VES can hold about 10^41 of our (known actual/space) universes! Hard to believe... The point here is a calculation for that energy density may use Stein's time proxy. A presumed (I am swagging here) maximum non-preferential time reversal frequency in axiom 4 must be related to Planck's frequency, which is about 10^43 alternations per spatial unit reference. Now here comes a real insight of enormous import. How can that much nonspace energy exist and sentients not know about it? Stein tells us nonspace's energy flux is nonpreferential! Its average (so to speak) is zero! Nonspace's energy is isotropic (I do not know the correct word to use here, i.e., a word quantum physicists might choose.)! One more note of interest: apparently a Casimir plenum detects nonspace energy. Search on Casimir.
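As a numeric aside (an editorial sketch; the identification of the note's figures with the Planck density and Planck frequency is an assumption, not the reviewer's or Stein's claim), the Planck-scale quantities do land on the quoted orders of magnitude:

    # Order-of-magnitude check of the figures quoted in Note tpme.
    # Assumption: they correspond to the Planck density and Planck frequency.
    hbar = 1.055e-34   # J*s
    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s

    planck_density = c**5 / (hbar * G**2)          # kg/m^3
    planck_frequency = (c**5 / (hbar * G))**0.5    # 1/s

    print(planck_density * 1e-3)   # ~5e93 g/cm^3 (1 kg/m^3 = 1e-3 g/cm^3)
    print(planck_frequency)        # ~1.9e43 per second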
Measurement axioms
1. measurement occurs when an AQO in space restricts the nonpreferential choice of an NQO in nonspace because the AQO in space occupies a spatial location that complements one of the nonpreferential loci in nonspace at the NQO's next time reversal step (Reviewer note: this is not how Stein says it. We interpreted his words on page 77, section 74. We think this is clearer, but you will have to absorb Stein's work before you decide to agree.)
2. Stein extends axiom 1 to the possible more general case of two NQOs co-restricting in nonspace—we quote, "Thus, for any measurements we make, we can say that our classical objects are of effectively infinite mass and in space. Nevertheless, they are not [effectively infinite mass and in space] in actuality. Therefore, the possibility arises that perhaps other than classical objects may make measurements. Perhaps there can be a coincidence of non-classical objects. [Reviewer's note: How else could the first AQO arise?] Perhaps, since strictly speaking no objects, not even the objects we are considering classical, are classical, measurement is not a coincidence of positions but of nonspace 'positions' or states." Here Stein makes clear that even he does not understand measurement. That is why it is one of the two big quantum problems remaining: measurement and interpretation. His new ontology supposedly helps us with the latter. Stein's description of measurement helps the reviewer's own sense of measurement immensely, despite his object-based evolutionary approach. Strangely, he apparently self-contradicts, because throughout this work measurement presumes object precedence. As proponents of a MoQ-Quantonic reality, this is a core issue since we presume flux precedence. This largely differentiates Stein's ontology from ours. Also, we need to quote Stein here regarding NQO interaction as it relates to measurement, "The existence of interactions, from which we infer the fact that a measurement took place, perhaps need not arise only during classical measurements." In other words, perhaps energy-exchanging interactions may occur as a result of NQO measurements in nonspace. Intuitively this is more general and closer to reviewer's own perception of quantum reality.
3. using axioms 1 and 2 the reviewer extends Stein's remarks to: measurement is both AQOs and/or NQOs compelling NQOs to make spatial choices (Reviewer's note: reader, again, be aware that Stein is only discussing the becoming or actualization aspect of reality. We found no perspicuous text where he describes the inverse. Certainly that is part of reality too.)
4. measurement is not an interaction (In the reviewer's opinion, Stein's reason for this is unclear in light of his statements that interaction might occur in nonspace. In our own Quantonics, derived and extended from Pirsig's MoQ, measurement is a Quality Event which is a quantonic interrelationship—a Value interrelationship—it is the transformation of Dynamic Quality (nonspace) into Static Quality (space) while simultaneously DQ is co-within SQ! Stein's measurement act performs a SOM 'or' transformation on nonspace to create a new, separate/separable object in space with state and properties. Pirsig's Quality Events perform a 'both/and' emersion on DQ and create a new DQ 'both/and' SQ SPoV with contextual Value interrelationships simultaneously among all other SPoVs, i.e., SPoVs ≈ SQ ≈ DQ. In our own Quantonics, objects—possessing state and properties—do not change with QEs. Interrelationships change with QEs. In Quantonics, Pirsig's SPoVs are defined by their interrelationships vis-à-vis Stein's objects are defined by their properties and state. I.e., in Quantonics DQ ≈ SQ. The interrelationships are where the Value is, and where Reality is defined. The reviewer possesses little knowledge of field theory, but guesses fields are all about interrelationships.) (Reviewer note: In Pirsig's MoQ, we see the profound replacement of objective states and properties with Quality or its synonym Value. Pirsig tells us that SPoVs in SQ do not (may not) possess Value! SPoVs are Static Patterns of Value co-within Value—DQ! Our current context-free mathematics appears incapable of depicting symbolically this co-within-it-ness Pirsig, Capra, Zukav, Zohar, etc., describe. It may, however, just be SOM's blinders imposing their restrictions on our interpretations of current mathematics.)
5. there are two kinds of measurement: coincidence without interaction and coincidence with interaction (see axioms 6 and 7)
6. coincidence without interaction changes the state of nonspace
7. coincidence with interaction changes the state of nonspace and exchanges energy via both nonspace and space (the energy exchange extension to both nonspace and space is from axiom 2 and Stein's interaction-qualifying quote)
8. measurement transformation of nonspace state eliminates the past of the AQOs and NQOs involved in the measurement
9. measurement transformation of nonspace state sets the initial conditions for the ensuing [NQO's] nonspace nonpreferential walk

Interaction axioms
1. interaction may occur with measurement
2. interactions in space have bounded speed
3. interactions exchange energy (i.e., require the existence of mass)
4. a Stein Prologue parenthetical says "The further extension of the ontology into the nature of interactions, giving rise to the concept of field, is not done here—interaction indeed is a very difficult concept to understand." P. 5 (In the reviewer's opinion, this is another pinhole in Stein's ontological dike. Instead of insisting on an ontology of objects, Stein should be insisting on an ontology of quantum interactions (interrelationships) among quantons (i.e., quantum wave functions, quons, fluxons, etc.) Were he to do so carrying over his brilliant prescience from this work, we intuit, his ontology would be both more exegetic and exoteric. See our list of Issues below, some of which discuss a few interaction-relevant difficulties.)

Space axioms
1. points in space are not points in nonspace
2. wherever there is a point in space there is an AQO and vice versa
3. space is a property of AQOs (In Quantonics, we would say flux is an interrelationship among all QOs which may be interpreted infinitely in an infinity of local contexts. In the classical SOM reality most Western cultured Homo sapiens intuit—mass-energy, length, and time (real and imaginary) are a few of the interpretations of QO flux interrelationships.)
4. space is just a special kind of nonspace (see AQO axioms 9 and 10) (in Pirsig's MoQ we say space is a latched emersion of nonspace)
5. classical law rules in space between measurements (see dual statement under nonspace)
6. space is the reservoir of all actualized possibilities (see dual statement under nonspace)

Return to the Review Outline

Summary and Exegesis: In this chapter, Stein succinctly summarizes his inspiring evolutionary process developed thus far. He lists twelve brief paragraphs which answer, to his level of acceptability, the question, "What is an object?" The first, fifth, and ninth items explicitly answer the question. The others support those three statements. I will quote three and you may read the rest in his book if you wish.

Section 86, sub-paragraph a. "An object is exactly nonspace restricted by a given value of t0, the average "reversal" time in the nonspace."

Section 86, sub-paragraph e. "An object therefore is all kinematical possibilities in nonspace and imaginary time, subject, however, to the restriction noted above [object mass determined by the average of the reversal times], which gives it the property of mass."

Section 86, sub-paragraph i. "Thus, nonspace is the basic reality from which, by measurement, space, time, and classical objects arise."

Finally, in this chapter, Stein implies he may have found a way to answer Dirac's concern about observables' dependence on Lorentzian reference frames. The reviewer thinks Stein achieves partial exegesis. But what he achieves is superb. We need more...
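AQO axiom 8 above leans on the de Broglie relation, so a brief numeric sketch may help fix the scales (an editorial sketch; the masses and speeds below are illustrative assumptions): \(\lambda = h/mv\), so the more massive the object, the shorter its wavelength, and the more 'essentially infinite' its mass looks in Stein's sense.

    # de Broglie wavelength lambda = h / (m v): larger mass, shorter wavelength.
    # The masses and speeds are assumptions chosen only for illustration.
    h = 6.626e-34   # J*s

    for name, m, v in [
        ("electron", 9.109e-31, 1.0e6),   # kg, m/s (assumed speed)
        ("baseball", 0.145, 40.0),        # kg, m/s (assumed speed)
    ]:
        lam = h / (m * v)
        print(name, lam, "m")

    # electron: ~7e-10 m, atomic scale; baseball: ~1e-34 m, utterly classical

This is one way to read axiom 9: an everyday object's wavelength (reversal scale) is so short that, for any measurement we can make, it behaves as an AQO of effectively infinite mass.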
Return to the Review Outline

Ontology I: (What Stein discovered in this work.)

Stein discovered much here, some of which appears self-contradictory—such is the nature of quantum reality viewed from a SOM mindset. Stein ends his chapter titled Ontology I with a goosebump-filled simile to Pirsig's MoQ. "I would call...[nonspace]...funda [Dynamic Quality] and...[space]...fundat [Static Quality]...Together, I would call them fundam [Quality]."

Return to the Review Outline

Ontology II: (Stein's tentative answer to, "What is Reality?")

Stein wraps it all up. He tells us that reality is both actuality and potential. Again we see the metaphor to Pirsig. Measurements cause transformations to both potential and/or actuality. Potentiality distinguishes the future from the present. Real time in actuality requires measurement on potential, which is change. Classical physics has no way to deal with the concept of change. (I.e., "...the classical object cannot give rise to a non-contradictory concept of change." p. 96. Another Pirsigean SOM-platypus? Appears related to Pirsig's causation platypus.) "The concept of change therefore makes sense only as a quantum mechanical concept."

In a trite and off-handed manner, Stein addresses phenomena in the last full paragraph of the book. That left the reviewer feeling empty. Contrast that empty feeling with the fuller feeling one gets upon grasping Pirsig's MoQ. In the reviewer's opinion, Pirsig's MoQ subsumes most of Stein's new ontology. Also, in reviewer's opinion, MoQ is more exegetic and exoteric than Stein's new ontology.

So you say, "OK Doug, Stein has told us what he thinks Reality is. What do you think Reality is?" Fair question. I have thought long and hard about this. I have read much on philosophy and science. Reality comes down to this for me, personally: Reality is both unlatched flux and latched flux and the transformations from one to the other. To me, Reality is flux-fluxing-flux. Flux is crux! :) Thanks for asking! The Zen among you may say I have lost my Quality. Perhaps... Thanks for reading this far! That ends the review!

Return to the Review Outline

Next is a list of issues which arose during the review. The following text deals in some detail with issues the reviewer wants some of you to consider if you have time and interest. Let us know what you think.

- Issue - Reality must be defined by classical concepts
- Issue - Interaction
- Issue - Length vis-à-vis Flux
- Issue - Imaginary time as a proxy for Casimir energy in nonspace?
- Issue - Assumed unidirectional flow of creation
- Issue - Gödel—Does Stein understand Gödelian consistency and completeness?
- Issue - Missing terms in Stein's work—speculation on why?

Return to the Review Outline

Issue - Reality must be defined by classical concepts: On pages 82-3 Stein tells us our reality is apparently classically objective. He tells us reality is measured with classical objects. He tells us all nonapparent reality must be defined by apparent classical concepts. We disagree, respectfully. Objective classical reality is a facade. It is a SOM facade. SOM is a facade! It is a tiny, partial, myopic, tunnel-visioned subset of reality. Before SOM, reality was not measured with classical objects. For over 100 centuries—before Parmenides, Plato, Aristotle, et al.—before the Birth of SOM just 25 centuries ago—sophists measured reality non-objectively, based on value, not objective properties. It is impossible for classical objects to arise without antecedent non-classical measurement. Sophism bore a child—SOM.
Its SOM child committed parenticide! We have lived with SOM's ongoing violence for 25 centuries. It is time for a n¤vel quantum ontology parent to correct its child. In the reviewer's opinion, SOM reality is incapable of defining the nonap-parent via SOM's ap-parent. SOM cannot even define its own concepts of mass, space, time, and change, let alone define the nonap-parent. The only way to define/describe nonap-parent reality is to invent nonap-parent memes which may evolve to post-SOMitic ap-parency—a novel paradigm of thought. In your reviewer's opinion, that is what a n¤vel ontology must accomplish.

Issue - Interaction:

Stein does a brilliant and seminal job of deriving much of quantum theory using his objective approach. He arrives at his own quantum object ontology which, not surprisingly, is a phenomena-co-dependent-on-proemial-object dual of a particle-wave-based, de facto, and incomplete quantum ontology. In his book's last full paragraph, Stein tells us that phenomena only arise from object interactions. We see this as a major flaw in Stein's work. Why? We attribute it to his SOM bias. But let's find a source of his flaw.

Stein is brilliant. He is clearly a genius. He is creative. He is efficient. He is productive. He is a consummate, multi-disciplined scientist. But he is just one of us, a Homo sapiens with finite intellect, and his fine sensitivities ken that, and ken that it applies to his audience too—to us. In a small book of only 100+ pages, how could Stein derive a new ontology and a dual quintessence of quantum mechanics from a concept of classical object? How could he derive a new "exegetic," or explanatory, ontology that could be what he calls "exoteric," or public—not just understandable and explainable among and by some scientific elite? (Reader, you see here the eminent nobleness of Stein's endeavors on our behalf. We agree with the goal and importance of seeking a new ontology for physics.)

To attain a truly exegetic, exoteric new ontology, he had to make some very basic assumptions or restrictions (using essentially classical concepts) for a development of his theory and its dual ontology. Those assumptions (see Stein's chapter, Classical Object) are a source of Stein's flaw we mentioned above. He assumes:

1. a classical point object as foundation
2. coincidence of point object and a point location in space-time
3. space exists (a concept associated with Stein's object)
4. time exists (a concept associated with Stein's object)
5. separately, space and time functionally define object (initially, for simplicity, prior to relativistic considerations)
6. object position is a function of time (initially; subsequently replaced by random walk)
7. an object's defining function of time is analytic (initially; subsequently replaced by non-preferential random walk)
8. present defines future and past (initially, to satisfy Stein's classical object analyticity)
9. one dimension of space (for simplicity)
10. one object (for simplicity)
11. no interaction (for simplicity)
12. no (inertial) mass (initially, for simplicity; subsequently average step-length of random walk becomes proxy)
13. objects pre-exist (initially he assumes objects do not arise; subsequently he shows that quantum objects arise)

OK, you say, "Where is that flaw?" It is in item 11 above, Stein's assumption of "no interaction."
(Stein tells us in his 10May2000 email that he did not say "interaction does not exist." We did not mean to imply that he said that. What we mean to imply is that his simplified model axiomatically disallows any interaction (again, appropriately and for model simplicity) among multiple objects or between two objects. Certainly, we mean no offense here! We are reviewing and stating our views and opinions!)

For simplicity and explainability (exegesis and exotericus) his new ontology allows no interaction among objects. Yet he claims phenomena arise out of object interactions! There is that flaw! In other words, based on his cherished assumptions, his ontology cannot explain phenomena. Well, almost! Stein cannot resist and goes ahead and talks about interactions despite his assumptions and this parenthetical remark in his prologue, to which we think he should have paid heed: "(The further extension of the ontology into the nature of interactions, giving rise to the concept of field, is not done here—interaction indeed is a very difficult concept to understand.)" Page 5.

Apparently, he thinks he must talk about interactions since his ontology would appear unfinished without describing mass. He tells us that a simple ontology demands non-interacting objects. But most of us SOMthink of objects classically, in a Newtonian sense, as substantial, having mass. Looking at reality through our SOM lenses, we see massive objects gravitationally affecting each other and even bumping into, bouncing off, or destroying each other. These behaviors are in the category Stein calls 'interactions.' Now remember, he assumes no interactions in his new ontology. Further, he tells us that whatever interactions there are occur only in space. But how did his actualized quantum objects, AKA 'classical objects,' get into space? How did they become massive? From whence their mass?

Stein implies that interactions involve transfer of energy. Mass and energy are duals of one another in much the way space and time are duals of one another. Stein does not use 'energy' as a term when he speaks of nonspace and space. He claims that time is a proxy for mass in nonspace and length is a proxy for mass in space. He tells us massive objects arise in space from nonspace upon measurement. Allow your reviewer the luxury of equating energy (our newer term is isoflux) with mass in nonspace. Thus, per Stein's new ontology, on measurement, energy/isoflux from nonspace exchanges or emerges into mass in space. If an interaction is an exchange of energy (transfer of mass, if you insist), then is it not clear that interaction occurs on certain kinds of measurement? All of this without even considering his surprising disclosure that interactions may occur in nonspace! Why would they not occur twixt space and nonspace too? Then, were that so, would not phenomena arise from nonspace? From your reviewer's perspective, Stein's new ontology requires an assumption of interaction, especially if he insists that phenomena only arise from object interactions. Hmmm...

Issue - Length vis-à-vis Flux:

In the reviewer's opinion, Stein makes a key object-based, object-biased assumption which is merely a matter of perspective. To show you what we mean, let's ask an elementary question: "Which is more fundamental to your perception of reality, length or flux?" Stein assumes the former and denigrates the latter.
Stein's most interesting maneuver emerges when his new ontology won't work unless he introduces dynamic change (flux?) to his sacred objective length. Fundamentally, we know the reciprocal relationship twixt wavelength and frequency. So we can infer that length is a proxy for frequency and vice versa. Consider these remarks by Stein: "…[from some calculations based on a random walk] we infer … the de Broglie relationship. This is the source of the so-called "wave" nature of matter in quantum mechanics—and it is not even quantum mechanical! It should be noted that this "wave" nature applies to an ensemble of objects or, if it does apply to a single object, it applies only over many (an infinity of) instants of time. We conclude from this that the de Broglie relationship is not necessarily a quantum mechanical result, but is rather a consequence of a random walk distribution as presented in this chapter. [chap. V]" Page 53.

The reviewer concludes: if the "wave" nature of matter is not quantum mechanical, then neither is the "length" nature of matter. Which is more fundamental to your perception of reality? Stein says "length," and we say flux. Remember how Einstein unified mass and energy in his famous equation? Are mass and energy more fundamental to your perception of reality than length? Is flux more fundamental than mass and energy? Or is length more fundamental than mass and energy? (Note that Einstein would probably agree with Stein. Einstein unified mass-energy, space-time, et al., but he failed, because of his own SOM bias, to unify particle-wave and determinism-nondeterminism.)

The reviewer assumes that we can heuristically depict legacy classical mechanics' three primal measurables, mass, length, and time, as consequences cum functions of flux, thus:

m = f(flux)  (the de Broglie relation does this, and Stein uses length as a mass proxy)
l = f(flux)  (this is simply wavelength)
t = f(flux)  (per Stein this is a wavelength identity; also, wave period is a time measure)

Now ask yourself, "Which characteristic is more general among these three axioms?" Does length arise conceptually from flux or does flux arise conceptually from length? Clearly they are co-concepts, or, may we offer in Niels Bohrese, complementary concepts? But if you have to choose one or the other as more fundamental, in the reviewer's opinion, you must choose flux. Why? The concept which Stein added to his random walk to make his approach work is 'change.' He had to add that concept! So change is not intrinsic to length, is it? Flux is change. Flux is a fundamental concept, antecedent to all Homo sapiens' models of Reality. (Also, reader, note Stein's emphasis on classical physics' inability to represent the concept of change, i.e., flux, and the need for the concept of change to be coherent in his new ontology.)
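As a small numerical aside on the length/flux reciprocity just discussed (ours, not Stein's or the reviewer's arithmetic): the de Broglie relation lambda = h/(m*v) converts a mass-and-speed description into a length, and nu = v/lambda converts that same object back into a flux (frequency). A minimal Python sketch:

```python
# Illustration only (not from Stein's book): de Broglie's relation as a
# length <-> flux converter, supporting the review's reciprocity argument.

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength (m) of a massive particle moving at the given speed."""
    return h / (mass_kg * speed_m_s)

v = 1.0e6                                # an electron at 10^6 m/s
lam = de_broglie_wavelength(m_e, v)      # the "length" picture
freq = v / lam                           # one simple "flux" (frequency) measure
print(f"wavelength = {lam:.3e} m, matter-wave frequency ~ {freq:.3e} Hz")
# wavelength ~ 7.3e-10 m: the two descriptions are reciprocal, as argued above.
```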
Issue - Imaginary time as a proxy for Casimir energy in nonspace?:

Stein assumes there is no 'space' in nonspace, yet Dirac quantum objects take nonpreferential random paths in nonspace whose steps are of varying time reversals—time is imaginary in nonspace—there is no 'space' or 'time' in nonspace! In space Stein defines mass in terms of length. In nonspace Stein defines mass in terms of time, not length. In nonspace we have only imaginary time. Does Stein mean that imaginary time is a proxy for mass? Or might we infer that imaginary time is a proxy for Casimir energy in nonspace, and length is a proxy for mass in space?

Issue - Assumed unidirectional flow of creation: (i.e., Stein appears not to discuss discreation.)

The reviewer was left wanting for Stein's ontology's description of how 'objects' return to his nonspace. The reviewer has yet to find any of Stein's peers who ponder this issue. Many, as he, ponder creation. But apparently few consider discreation. (Damn those classical laws of thermodynamics! :-) ) (Note: The reviewer, on reading Aristotle, found that he considered discreation, "And in general if a thing is perishing, there will be present something which exists;" see Aristotle's Metaphysics, Book IV, Chapter 5, ~full paragraph 10.)

Upon review of the small subset of ISMs attributed to SOM, one finds a predilection toward the fundamental Aristotelian ontology. Ontology, in the Aristotelian philosophical system, is that branch of metaphysics which considers the nature of being. Among the ISMs, things appear to preexist, mostly. Little is said about becoming (to arise, to be created, to emerge—which Stein covers amply), and your reviewer found virtually nothing in ISMs on the concepts of discreation, de-emergence, devolution, etc. We would expand ontology to cover: becoming, being with change, being without change, and unbecoming. Stein certainly does not describe the latter in his text. One senses another limitation of the great SOM legacy here. I.e., once something becomes an object, it is always an object or an objective transformation of an object. In the reviewer's opinion, this is myopic. Our classical thermodynamic laws reflect this near-sightedness. (For example, read Mae-Wan Ho.) Stein's objects in space must have an ontological possibility for a return to nonspace.

Issue - Gödel—Does Stein understand Gödelian consistency and completeness?:

Stein apparently uses the classical senses of completeness or incompleteness, consistency or inconsistency. Stein's appears to be a non-Gödelian definition or interpretation of completeness/consistency. His use of the word probably assumes SOM's one global/absolute truth in one global context. If so, his ontology is probably at odds with more Gödelian completeness/incompleteness interpretations of quantum mechanics. If so, it certainly is at odds with the multi- and omni-contextual aspects of quantum science's many isles of truth.

Issue - Missing terms in Stein's work—speculation on why?: (quantum coherence, complementarity, etc.)

The reviewer thinks this has been one of the major problems in quantum science since its beginning late in the 19th century. Almost no one, because of legacy SOM, could understand Niels Bohr's 'complementarity.' His critics called it "subjective," again due to their engrained subject-object schism. In the reviewer's opinion, Stein is showing us some of his personal residuals from the great SOM legacy. In the reviewer's opinion, Bohr's complementarity along with quantum coherency together solve the SOM paradox and its concomitant denial of both separation and unification. SOM, due to its foundational axioms, has no way to grasp both subject and object unified, let alone both unified and distinct.
Or as Mae-Wan Ho puts it in her The Rainbow and the Worm, "A coherent state thus maximizes both global cohesion and also local freedom...a domain of coherent, autonomous activity." See pages 151 and 153. Pirsig, Ho, et al., geniuses in their own right, have given us the great gift of a new meme: global coherence balanced with local autonomy/freedom. Pirsig: quanton(DQ,SQ), and Ho: quanton(global_cohesion,local_autonomy). Clearly, this complementary quantum coherence is a new meme that few have understood. Stein's goal of a new exoteric ontology may not be achieved, in the reviewer's opinion, without it. A great example is that we are incapable of describing how living, biological systems tap the non-dissipative, non-thermalized energy of Stein's nonspace without this new meme.

End of Review:

The last sentence in Stein's book says, "This work is now concluded." The reviewer respectfully disagrees. Stein shows us unambiguously that this work has only begun.

Conclusion: Stein's ontology appears to be a subspecies of Pirsig's MoQ. Pirsig's MoQ contains a superior ontology as a new foundation of physics. Stein offers at least two levels of emergence: first, classical object emerging from nonspace, and second, phenomena emerging from classical-object-to-classical-object interactions. Where Stein's ontology finds phenomena distinct from objects, Pirsig's MoQ finds that everything we know—a single class of Static Patterns of Value—composes the actual part of reality. Pirsig's SPoVs (Stein's classical objects) emerge from Dynamic Quality (Stein's nonspace) via Quality Events (Stein's measurements) to create Static Quality (Stein's space). The two descriptions are nearly the same. Stein focuses on SOM's legacy Aristotelian object. Pirsig focuses on unified SPoVs, which fit modern quantum wave science more closely in our opinion.

Thanks for reading, and many quantum truths to you,

Doug Renselle

Additional Reviewer Comments on the Quantum Schrödinger Object:

The pre-quantum object is no longer analytic. All of time is required to specify its identity. The object is no longer the same from instant to instant. Another way to say that is: there is no analytic f(t) which specifies its precise location from instant to instant—from a given position x, the pre-quantum object may move randomly + or - one 'stepsize' to its next position in the random walk. Notice the classical 'or' in that last sentence. Using classical thinking, we cannot make Stein's random walk object (RWO) work, because the decisions at each step of the walk must be non-preferential. But SOMthink tells us to think preferentially, to think 'or.' Classically, the step must be either + or -.
Stein saw that the step had to be nonpreferential; therefore he concluded what is classically unreasonable: it must go both directions, both + and -, simultaneously. At this point in the evolution of our pre-quantum object, we discard classical object either/or ontology. For the reviewer this is an awesome place in the evolution of human thought. Right here! The beginnings of quantum enlightenment, right here! We begin the process of departure from SOMthink, and embark on a new Chautauqua of MoQthink, of Quantonic thinking. Stein then describes the walk attributes of this new quantum object.

Quickie summary to this point: Stein tells us on page 60 that this nonpreferential walk is the ontological basis of quantum mechanics, vis-à-vis the space-time identity as the ontological basis of special relativity, and vis-à-vis the analytic f(t) as the ontological basis for classical mechanics.

At this juncture Stein introduces the critical concept of quantum measurement (AKA special event, Quality Event). Permit the reviewer to oversimplify here for expediency. In the quantum realm there are two divisions of reality, familiar to some of us by various synonymous pairs of bipolar appellations (nonspace/space, potential/actual, and Dynamic Quality/Static Quality among them). Quantum measurement causes a quantum object to transition from the left division of reality to the right division of reality in each of the above pairs. (Reviewer's note: Few authors, apparently including Stein, describe transitions from the right division to the left division of reality. Your reviewer is an exception, however. See MoQ and Language on this site.) Using Stein's vocabulary for this review, we adhere to three terms: measurement, nonspace, and space. His quantum object, then, when it is in nonspace, is "entirely at both locations" at each step of its walk. Upon measurement, though, and transition to space, the quantum object has a 50-50 probability of becoming real (actual) in one of the two locations of each step of the walk.

Here, by comparison to Pirsig's new philosophy, the MoQ, we see Stein's transition from nonspace to space as a precise dual of MoQ's creation/evolution of Static Pattern(s) of Value, via the Value interrelationship between dynamic nonspace and static space. This is more affirmation of MoQ as a quantum science parent philosophy.

Stein puts much effort into the problematic issues of determinism and free will at this juncture. We leave the details for you to read, but will summarize by saying that Stein's nonspace is deterministic (all possible outcomes exist simultaneously and without preference, achieving nonspace analyticity between transitions to space), and the consequence of measurement, and subsequent quantum object (non-analytic) transition to space, is a probabilistic (Pirsigean Value) choice based upon both nonspace and space initial conditions... (Instead of: A causes B—Pirsig's SOM causation platypus; Stein's model correctly elicits: B values condition A—Pirsig's Value solution to the causation platypus. More affirmation of MoQ as a quantum science parent philosophy.) ...at the moment of measurement. To quote Stein directly, "...if a measurement...is made on the nonspace of object, then the result...is totally non-determined except for the restrictions of possibilities determined by the state of the system (its various nonspace 'positions') and the kind of measurement made." Page 60. Finally Stein tells us that the 'time' of a quantum object may only be determined by its space 'proxy.'
At this stage in the evolution of our quantum object, Stein still views the 'proxy' of the quantum object as a classical object. That is, the quantum object is both classical while it is in space, and quantum while it is in nonspace. For the reviewer, this is confusing. He quiets our concerns by reminding us that this is the difference twixt space and nonspace. Again, looking for affirmation of MoQ, we see 'time' as a Static Pattern of Value which demarcates the probabilistic Value creation/evolution nonspace-space transitions of Stein's non-relativistic quantum mechanical model.

In the balance of this chapter, Stein goes on to derive both the Heisenberg uncertainty principle and the Schrödinger equation. He concludes the chapter thus: "I conclude, therefore, that quantum mechanics, at least non-relativistic quantum mechanics, is the description of an object making a 'no-preference' walk in nonspace and imaginary time, while making only a single step between two (real) time instants. Determinism exists, but only in nonspace and only between two time instants. In fact, nothing at all has been assumed about the behavior of objects in nonspace and imaginary time; any property of object between two time instants...is a consequence of the nature of the object at (real) time instants."

The reviewer finds Stein's classical use of the terms 'real' and 'imaginary' misleading in the quantum realm. (See Stein's 10May2000 email to Doug. Also see Doug's more recent Millennium III Map of Reality. Juxtapose it to our previous heuristic which includes Stein's 'space' and 'nonspace.' 20May2000 PDR) Reality is everything! To call part of reality 'real' misleads. From our perspective, Stein's word 'real' should be replaced by 'actual,' or 'known.' Reality then becomes a quantum combination of the known and what he calls the 'imaginary.' (I admit guilt at doing the same thing. This is legacy SOM imposing its facile will on the weak.)

The Typical Path of a Quantum Object:

We found this in The Fractal Geometry of Nature, by Benoit B. Mandelbrot, W. H. Freeman & Company, 1983, p. 239: "This discussion can close by mentioning a new fractal wrinkle to the presentation of quantum mechanics. Feynman & Hibbs 1965 notes that typical path of a quantum mechanical particle is continuous and nondifferentiable, and many authors observe similarities between Brownian and quantum-mechanical motions (see, for example, Nelson 1966 and references herein). Inspired by these parallels and by my early Essays, Abbot & Wise 1980 shows that the observed path of a particle in quantum mechanics is a fractal curve with D=2. The analogy is interesting, at least pedagogically."

We found this while researching our next review, about Buridan on Self Reference. We see, and think Dr. Stein will see, the connection here to his random walk in nonspace. For the interested reader's convenience, we list here the above-mentioned references:

Feynman, R. P. & Hibbs, A. R., 1965. Quantum Mechanics and Path Integrals. New York: McGraw-Hill.
Nelson, E., 1966. Derivation of the Schrödinger equation from Newtonian mechanics. Physical Review 150, 1079-1085. [Reviewer's note: This sounds very much like what Stein did in his book.]
Abbott, L. F. & Wise, M. B., 1981 [1980?]. Dimension of a quantum-mechanical path. American J. of Physics 49, 37-39.
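To tie Stein's nonpreferential walk to the Abbott-Wise D = 2 remark just quoted, here is a minimal simulation sketch (our illustration; neither Stein's nor Mandelbrot's code). It checks two standard facts about an unbiased random walk: the ensemble spreads diffusively (rms displacement grows like sqrt(n)), and a single trail's measured length keeps growing as the measuring "ruler" shrinks, with the scaling of a fractal of dimension 2.

```python
# Illustration (ours): a nonpreferential random walk, its diffusive spread,
# and the D = 2 scaling of a single trail measured with ever finer rulers.
import random, math

random.seed(1)

def walk_2d(n):
    """n unit steps in uniformly random directions; returns all positions."""
    x = y = 0.0
    pts = [(x, y)]
    for _ in range(n):
        a = random.uniform(0.0, 2.0 * math.pi)
        x += math.cos(a); y += math.sin(a)
        pts.append((x, y))
    return pts

# (a) Ensemble spread: after n steps, rms displacement ~ sqrt(n) (diffusion).
finals = [walk_2d(1000)[-1] for _ in range(200)]
rms = math.sqrt(sum(x * x + y * y for x, y in finals) / len(finals))
print(f"rms displacement after 1000 steps: {rms:.1f} (sqrt(1000) ~ 31.6)")

# (b) Trail length vs ruler: sample the path every k steps. The ruler size is
# eps ~ sqrt(k), and the measured length scales like eps^(1-D); here D ~ 2.
path = walk_2d(10_000)
for k in (1, 4, 16, 64):
    samples = path[::k]
    length = sum(math.dist(p, q) for p, q in zip(samples, samples[1:]))
    print(f"ruler ~ {math.sqrt(k):4.1f}: measured length ~ {length:9.1f}")
# Halving the ruler roughly doubles the length: the Abbott-Wise D = 2.
```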
path integral

The notion of path integral originates in and is mainly used in the context of quantum mechanics and quantum field theory, where it is a certain operation supposed to model the notion of quantization. The idea is that the quantum propagator – in FQFT the value of the functor $U : Cob \to Vect$ on a certain cobordism – is given by an integral kernel

$$U : \psi \mapsto \int K(-,y)\, \psi(y)\, d\mu$$

where $K(x,y)$ is something like the integral of the exponentiated action functional $S$ over all field configurations $\phi$ with prescribed boundary data $x$ and $y$. Formally one writes

$$K(x,y) = \int \exp(i S(\phi))\; D\phi$$

and calls this the path integral. Here the expression $D\phi$ is supposed to allude to a measure integral on the space of all $\phi$. The main problem with the path integral idea is that it is typically unclear what this measure should be, or, worse, it is typically clear that no suitable such measure does exist.

The name path integral originates from the special case where the system is the sigma model describing a particle on a target space manifold $X$. In this case a field configuration $\phi$ is a path $\phi : [0,1] \to X$ in $X$, hence the integral over all field configurations is an integral over all paths.

The idea of the path integral famously goes back to Richard Feynman, who motivated the idea in quantum mechanics. In that context the notion can typically be made precise and shown to be equivalent to various other quantization prescriptions. The central impact of the idea of the path integral however is in its application to quantum field theory, where it is often taken in the physics literature as the definition of what the quantum field theory encoded by an action functional should be, disregarding the fact that in these contexts it is typically quite unclear what the path integral actually means, precisely. Notably the Feynman perturbation series summing over Feynman graphs is motivated as one way to make sense of the path integral in quantum field theory, and in practice it usually serves as a definition of the perturbative path integral.

We start with stating the elementary description of the Feynman-Kac formula as traditional in physics textbooks. Then we indicate the more abstract formulation of this in terms of integration against the Wiener measure on the space of paths (for the Euclidean path integral). Then we indicate a formulation in perturbation theory and BV-formalism.

Elementary description in quantum mechanics

A simple form of the path integral is realized in quantum mechanics, where it was originally dreamed up by Richard Feynman and then made precise using the Feynman-Kac formula. (Most calculations in practice are still done using perturbation theory; see the section on the BV-formalism below.)

The Schrödinger equation says that the rate at which the phase of an energy eigenvector rotates is proportional to its energy:

$$i \hbar \frac{d}{dt} \psi = H \psi. \qquad (1)$$

Therefore, the probability amplitude that the system evolves to the final state $\psi_F$ after evolving for time $t$ from the initial state $\psi_I$ is (setting $\hbar = 1$ from now on)

$$\langle \psi_F | e^{-i H t} | \psi_I \rangle. \qquad (2)$$
Chop this up into time steps $\Delta t = t/N$ and use the resolution of the identity

$$\int_{-\infty}^{\infty} |q\rangle \langle q| \, dq = 1 \qquad (3)$$

to get

$$\langle \psi_F | e^{-i H \Delta t} \left( \int_{-\infty}^{\infty} |q_{N-1}\rangle \langle q_{N-1}| \, dq_{N-1} \right) e^{-i H \Delta t} \left( \int_{-\infty}^{\infty} |q_{N-2}\rangle \langle q_{N-2}| \, dq_{N-2} \right) e^{-i H \Delta t} \cdots e^{-i H \Delta t} \left( \int_{-\infty}^{\infty} |q_1\rangle \langle q_1| \, dq_1 \right) e^{-i H \Delta t} | \psi_I \rangle \qquad (4)$$

$$= \int_{q_1} \cdots \int_{q_{N-2}} \int_{q_{N-1}} \langle \psi_F | e^{-i H \Delta t} | q_{N-1} \rangle \langle q_{N-1} | e^{-i H \Delta t} | q_{N-2} \rangle \langle q_{N-2} | e^{-i H \Delta t} \cdots e^{-i H \Delta t} | q_1 \rangle \langle q_1 | e^{-i H \Delta t} | \psi_I \rangle \, dq_{N-1} \, dq_{N-2} \cdots dq_1. \qquad (5)$$

Assume we have the free Hamiltonian $H = p^2 / 2m$. Looking at an individual factor $\langle q_{n+1} | e^{-i H \Delta t} | q_n \rangle$, we can insert a complete set of momentum eigenstates and evaluate the resulting Gaussian integral:

$$\langle q_{n+1} | e^{-i H \Delta t} \left( \int_{-\infty}^{\infty} \frac{dp}{2\pi} |p\rangle \langle p| \right) | q_n \rangle = \int_{-\infty}^{\infty} \frac{dp}{2\pi} \, e^{-i p^2 \Delta t / 2m} \, \langle q_{n+1} | p \rangle \langle p | q_n \rangle = \int_{-\infty}^{\infty} \frac{dp}{2\pi} \, e^{-i p^2 \Delta t / 2m} \, e^{i p (q_{n+1} - q_n)} = \left( \frac{m}{2 \pi i \Delta t} \right)^{\frac{1}{2}} e^{i \Delta t (m/2) [(q_{n+1} - q_n)/\Delta t]^2}. \qquad (6)$$

Writing

$$\int Dq = \lim_{N \to \infty} \left( \frac{m}{2 \pi i \Delta t} \right)^{\frac{N}{2}} \prod_{n=1}^{N-1} \int dq_n, \qquad (7)$$

and letting $\Delta t \to 0$, $N \to \infty$, we get

$$\langle \psi_F | e^{-i H t} | \psi_I \rangle = \int Dq \; e^{i \int_0^t dt \, \frac{1}{2} m \dot{q}^2}. \qquad (8)$$

For arbitrary Hamiltonians $H = \frac{p^2}{2m} + V(q)$, we get

$$\langle \psi_F | e^{-i H t} | \psi_I \rangle = \int Dq \; e^{i \int_0^t dt \, \left( \frac{1}{2} m \dot{q}^2 - V(q) \right)} = \int Dq \; e^{i \int_0^t \mathcal{L}(\dot{q}, q) \, dt} = \int Dq \; e^{i S(q)}, \qquad (9)$$

where $S(q)$ is the action functional. Is there an easy way to see how the Hamiltonian transforms into the Lagrangian in the exponent? One way: the Gaussian integral over $p$ in (6) is dominated by its stationary point $p = m \dot{q}$, at which $p \dot{q} - H(p,q)$ is precisely the Legendre transform of the Hamiltonian, i.e., the Lagrangian.
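Before passing to the Wiener-measure formulation, here is a minimal numerical check of the time-slicing formulas (7)-(9) above (our sketch, not part of the original entry). For numerical stability we Wick-rotate $t \to -i\tau$, so each slice becomes a damped Gaussian kernel; repeated Euclidean slicing projects onto the ground state, and for the harmonic potential $V(q) = q^2/2$ with $m = \hbar = 1$ the extracted energy should approach $1/2$.

```python
# Time-sliced (Euclidean) path integral on a grid: repeated application of the
# short-time kernel K(x,x') = sqrt(m/(2 pi dtau)) *
#   exp(-m (x-x')^2 / (2 dtau) - dtau (V(x)+V(x'))/2)
# projects any start state onto the ground state of H = p^2/2m + V(q).
import numpy as np

m, dtau, nslices = 1.0, 0.05, 400
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
V = 0.5 * x**2                           # harmonic oscillator, exact E0 = 0.5

X, Xp = np.meshgrid(x, x, indexing="ij")
K = np.sqrt(m / (2 * np.pi * dtau)) * np.exp(
    -m * (X - Xp) ** 2 / (2 * dtau) - dtau * 0.5 * (V[:, None] + V[None, :])
)

psi = np.exp(-((x - 1.0) ** 2))          # arbitrary start state
for _ in range(nslices):
    new = K @ psi * dx                   # one more slice of imaginary time
    # norm ratio -> exp(-2 E0 dtau) once converged to the ground state
    E0 = -np.log(np.sum(new * new) / np.sum(psi * psi)) / (2 * dtau)
    psi = new / np.max(np.abs(new))      # renormalize to avoid underflow
print(f"estimated ground-state energy: {E0:.4f} (exact: 0.5)")
```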
As an integral against the Wiener measure

More abstractly, the Euclidean path integral for the quantum mechanics of a charged particle may be defined by integrating the gauge-coupling action against the Wiener measure on the space of paths.

Consider a Riemannian manifold $(X,g)$ – hence a background field of gravity – and a connection $\nabla : X \to \mathbf{B}U(1)_{conn}$ – hence an electromagnetic background gauge field. The gauge-coupling interaction term is given by the parallel transport of this connection

$$\exp(i S) \coloneqq \exp\left(2\pi i \int_{(-)} [(-),\nabla]\right) \colon [I, X]_{x_0,x_1} \to Hom(E_{x_0}, E_{x_1}),$$

where $E \to X$ is the complex line bundle which is associated to $\nabla$.

The Wiener measure $d\mu_W$ on the space of stochastic paths in $X$ we may suggestively write as

$$d\mu_W = [\exp(-S_{kin}) D\gamma],$$

for it combines what in the physics literature is the kinetic action and a canonical measure on paths. (This is a general phenomenon in formalizations of the process of quantization: the kinetic action (the free field theory part of the action functional) is absorbed as part of the integration measure against which the remaining interaction terms are integrated.)

Then one has (e.g. Norris 92, theorem (34); Charles 99, theorem 6.1): the integral kernel for the time evolution propagator is

$$U(x_0,x_1) = \int_{\gamma} tra(\nabla)(\gamma) \, [\exp(-S_{kin}(\gamma)) D\gamma],$$

hence the integration of the parallel transport/holonomy against the Wiener measure. (To make sense of this one first needs to extend the parallel transport from smooth paths to stochastic paths; see the references below.)

This "holonomy integrated against the Wiener measure" is the path integral in the form in which it notably appears in the worldline formalism for computing scattering amplitudes in quantum field theory. See (Strassler 92, (2.9), (2.10)). Notice in particular that by the discussion there this is the correct Wick rotated form: the kinetic action is not a complex phase but a real exponential $\exp(-S_{kin})$, while the gauge interaction term (the holonomy) is a complex phase (locally $\exp(i \int_\gamma A)$).

From the point of view of higher prequantum field theory this means that the path integral sends a correspondence in the slice (infinity,1)-topos of smooth infinity-groupoids over the delooping groupoid $\mathbf{B}U(1)$

$$\array{ && [I,X] \\ & {}^{(-)|_0}\swarrow && \searrow^{(-)|_1} \\ X && \swArrow_{\exp(i S)} && X \\ & {}_{\mathllap{\chi(\nabla)}}\searrow && \swarrow_{\mathrlap{\chi(\nabla)}} \\ && \mathbf{B}U(1) }$$

(essentially a prequantized Lagrangian correspondence) to another correspondence, now in the slice over the stack (now an actual 2-sheaf) $\mathbb{C}\mathbf{Mod}$ of modules over the complex numbers, hence of complex vector bundles:

$$\array{ && X \times X \\ & {}^{p_1}\swarrow && \searrow^{p_2} \\ X && \swArrow_{\int_{\gamma}\exp(i S(\gamma)) [\exp(-S_{kin}(\gamma))D\gamma]} && X \\ & {}_{\mathllap{\rho(\chi(\nabla))}}\searrow && \swarrow_{\mathrlap{\rho(\chi(\nabla))}} \\ && \mathbb{C}\mathbf{Mod}. }$$

For more discussion along these lines see at motivic quantization.
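A minimal Monte Carlo rendering of the formula $U(x_0,x_1) = \int_\gamma tra(\nabla)(\gamma)\,[\exp(-S_{kin}(\gamma)) D\gamma]$ (our sketch, not from the cited references): take $X = \mathbb{R}$ with the flat connection 1-form $A = a\, dx$. Sampling Brownian increments realizes the Wiener measure $[\exp(-S_{kin}) D\gamma]$ directly, and we then average the $U(1)$ holonomy along each path. For Brownian motion $E[\exp(i a W_t)] = \exp(-a^2 t/2)$, so the estimate can be checked in closed form.

```python
# "Holonomy integrated against the Wiener measure", 1D toy version: sample
# Brownian paths (this sampling IS the Wiener measure, i.e. exp(-S_kin) Dgamma)
# and average the U(1) parallel transport exp(i * integral of A = a dx).
import numpy as np

rng = np.random.default_rng(0)
a, t, nsteps, npaths = 1.3, 2.0, 200, 100_000
dt = t / nsteps

dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps))  # Wiener increments
line_integral = a * dW.sum(axis=1)        # integral of A = a dx along each path
holonomy = np.exp(1j * line_integral)     # parallel transport of the connection

estimate = holonomy.mean()
exact = np.exp(-a**2 * t / 2)             # characteristic function of W_t
print(f"Monte Carlo: {estimate.real:.4f}   exact exp(-a^2 t/2): {exact:.4f}")
```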
Perturbatively for free field theory in BV-formalism

BV-BRST formalism is a means to formalize the path integral in perturbation theory as the passage to cochain cohomology in a quantum BV-complex. See at The BV-complex and homological integration for more details. Schematically:

action functional = kinetic action + interaction + path integral measure
BV differential = elliptic complex + antibracket with interaction + BV-Laplacian

The path integral in the bigger picture

Ours is the age whose central fundamental theoretical physics question is: What is quantum field theory? A closely related question is: What is the path integral? After its conception by Richard Feynman in the middle of the 20th century, it was notably Edward Witten's achievement in the late 20th century to make clear the vast potential for fundamental physics and pure math underlying the concept of the quantum field theoretic path integral.

And yet, among all the aspects of QFT, the notion of the path integral is the one that has resisted attempts at formalization the most. While functorial quantum field theory is the formalization of the properties that the locality and the sewing law of the path integral are demanded to have – whatever the path integral is, it is a process that in the end yields a functor on an (infinity,n)-category of cobordisms – by itself, this sheds no light on what that procedure called "path integration" or "path integral quantization" is.

The single major insight into the right higher categorical formalization of the path integral is probably the idea indicated by Daniel Freed, which says that

• it is wrong to think of the action functional that the path integral integrates over as just a function: it is a higher categorical object;

• accordingly, the path integral is not something that just controls the numbers or linear maps assigned by a $d$-dimensional quantum field theory in dimension $d$: also the assignment to higher codimensions is to be regarded as part of the path integral;

• notably: the fact that quantum mechanics assigns a (Hilbert) space of sections of a vector bundle to codimension 1 is to be regarded as due to a summing operation in the sense of the path integral, too: the space of sections of a vector bundle is the continuum equivalent of the direct sum of its fibers.

More recently, one sees attempts to formalize this observation of Freed's, notably in the context of the cobordism hypothesis, based on material (on categories of "families") in On the Classification of Topological Field Theories.

References

The original textbook reference is

• Richard Feynman, A. R. Hibbs, Quantum Mechanics and Path Integrals, New York: McGraw-Hill (1965).

Textbook accounts include

• G. Johnson, M. Lapidus, The Feynman Integral and Feynman's Operational Calculus, Oxford University Press, Oxford, 2000.
• Barry Simon, Functional Integration and Quantum Physics, AMS Chelsea Publ., Providence, 2005.
• Joseph Polchinski, String Theory, part I, appendix A.

Discussion in constructive quantum field theory includes

• James Glimm, Arthur Jaffe, Quantum Physics – A Functional Integral Point of View, 535 pages, Springer.
• Barry Simon, Functional Integration in Quantum Physics (AMS, 2005).
• Sergio Albeverio, Raphael Høegh-Krohn, Sonia Mazzucchi, Mathematical Theory of Feynman Path Integrals – An Introduction, 2nd corrected and enlarged edition, Lecture Notes in Mathematics, Vol. 523, Springer, Berlin, 2008 (ZMATH).
• Sonia Mazzucchi, Mathematical Feynman Path Integrals and Their Applications, World Scientific, Singapore, 2009.

The worldline path integral as a way to compute scattering amplitudes in QFT was understood in (Strassler 92), cited above.

Stochastic integration theory

The following articles use integration over Wiener measures on stochastic processes for formalizing the path integral.

• James Norris, A complete differential formalism for stochastic calculus in manifolds, Séminaire de probabilités de Strasbourg, 26 (1992), pp. 189-209 (NUMDAM).
• Vassili Kolokoltsov, Path integration: connecting pure jump and Wiener processes (pdf).
• Bruce Driver, Anton Thalmaier, Heat equation derivative formulas for vector bundles, Journal of Functional Analysis 183, 42-108 (2001) (pdf).

For the charged particle / path integral of the holonomy functional

The following articles discuss (aspects of) the path integral for the charged particle coupled to a background gauge field, in which case the path integral is essentially the integration of the holonomy/parallel transport functional against the Wiener measure.

• Marc Arnaudon, Anton Thalmaier, Yang–Mills fields and random holonomy along Brownian bridges, Ann. Probab. 31, no. 2 (2003), 769-790 (Euclid).
• Mikhail Kapranov, Noncommutative geometry and path integrals, in Algebra, Arithmetic and Geometry, Birkhäuser Progress in Mathematics 27 (2009) (arXiv:math/0612411).
• Christian Bär, Frank Pfäffle, Path integrals on manifolds by finite dimensional approximation, J. reine angew. Math. 625 (2008), 29-57 (arXiv:math.AP/0703272).
• Dana Fine, Stephen Sawin, A rigorous path integral for supersymmetric quantum mechanics and the heat kernel (arXiv:0705.0638).

A discussion for phase spaces equipped with a Kähler polarization and a prequantum line bundle is in

• Laurent Charles, Feynman path integral and Toeplitz quantization, Helv. Phys. Acta 72 (1999) 341 (pdf),

following Norris 92, theorem (34).

Discussion of quantization of Chern-Simons theory via a Wiener measure is in

• Adrian P. C. Lim, Chern-Simons path integral on $\mathbb{R}^3$ using abstract Wiener measure (pdf).

Lecture notes on quantum field theory emphasizing the mathematics of Euclidean path integrals and the relation to statistical physics also exist; see further these MathOverflow questions: mathematics-of-path-integral-state-of-the-art, path-integrals-outside-qft, doing-geometry-using-feynman-path-integral, path-integrals-localisation, finite-dimensional-feynman-integrals, the-mathematical-theory-of-feynman-integrals.

• Theo Johnson-Freyd, The formal path integral and quantum mechanics, J. Math. Phys. 51, 122103 (2010) (arXiv:1004.4305, doi); On the coordinate (in)dependence of the formal path integral (arXiv:1003.5730).
Anticipation – A Spooky Computation

Conference on Computing Anticipatory Systems (CASYS 99), Liege, Belgium, August 8-11, 1999

Mihai Nadin
Program in Computational Design, University of Wuppertal
Computer Science, Center for the Study of Language and Information, 201 Cordura Hall, Stanford University

Robert Rosen, in memoriam

As the subject of anticipation claims its legitimate place in current scientific and technological inquiry, researchers from various disciplines (e.g., computation, artificial intelligence, biology, logic, art theory) make headway in a territory of unusual aspects of knowledge and epistemology. Under the heading anticipation, we encounter subjects such as preventive caching, robotics, advanced research in biology (defining the living) and medicine (especially genetically transmitted disease), along with fascinating studies in art (music, in particular). These make up a broad variety of fundamental and applied research focused on a controversial concept. Inspired by none other than Einstein–he referred to spooky actions at a distance, i.e., what became known as quantum non-locality–the title of the paper is meant to submit my hypothesis that such processes are related to quantum non-locality. The second goal of this paper is to offer a cognitive framework–based on my early work on mind processes (1988)–within which the variety of anticipatory horizons invoked today finds a grounding that is both scientifically relevant and epistemologically coherent. The third goal of this paper is to identify the broad conceptual categories under which we can identify progress made so far and possible directions to follow. The fourth and final goal is to submit a co-relation view of anticipation and to integrate the inclusive recursion in a logic of relations that handles co-relations.

Keywords: auto-suggestive memory, co-relation, non-locality, quantum semiotics, self-constitution, interactive computation

1 Introduction

Anticipation could become the new frontier in science. Trends, scientific fashions, and priority funding programs succeed one another rapidly in a society that experiences a dynamics of change reflected in ever shorter cycles of discovery, production, and consumption. Frontiers mark stark discontinuities that ascertain fundamentally new knowledge horizons. Einstein stated, "No problem can be solved from the same consciousness that created it. We must learn to see the world anew." It is in this respect that I find it extremely important to begin by putting the entire effort into a broad perspective.

2 The Philosophic Foundation of Anticipation is Not Trivial

Philosophical considerations cannot be avoided (provided that they are not pursued as a means in themselves). Robert Rosen (1985) quoted David Hawkins: "Philosophy may be ignored but not escaped." Rosen, whose work deserves to be integrated into current scientific dialog more than has been the case until his untimely death, understood this thought very well. Anticipation bears a heavy burden of interpretations. As initial attempts (Rosen, 1985; Nadin, 1988; Dubois, 1992) to recover the concept and to give it a scientific foundation prove, the task is difficult. We face here the dominant deterministic view inspired by a model of the universe in which a net distinction between cause and effect can be made. We also face a reductionist understanding of the world, which claims that physics is paradigmatic for everything else.
Moreover, we are captive to an understanding of time and space that corresponds to the mathematical descriptions of the physical world: Time is uniquely defined along the arrow from past to future; space is homogeneous. Finally, we are given to the hope that science leads to laws on whose basis we may make accurate predictions. Once we accept these laws, anticipation can at best be accepted as one of these predictions, but not as a scientific endeavor on its own terms.

A clear image of the difficulties in establishing this foundation results from revisiting Rosen's work on anticipatory systems, above all his fundamental work, Life Itself (1991). Indeed, his rigorous argumentation, based on solid mathematical work and on a grounding in biology second to none among his peers, makes sense only against the background of the philosophic considerations set forth in his writings. It might not matter to a programmer whether Aristotle's causa finalis (final cause) can be ascertained or justified, or deemed passé and unacceptable. A programmer's philosophy does not directly affect lines of code; neither do disputes among those partial to a certain world view. What is affected is the general perspective, i.e., the understanding of a program's meaning. If the program displays characteristics of anticipation, the philosophic grounding might affect the realization that within a given condition–such as embodied in a machine–the simulation of anticipatory features should not be construed as anticipation per se.

The philosophic foundation is also a prerequisite for defining how far the field can be extended without ending up in a different cognitive realm. Regarding this aspect, it is better to let those trying to expand the inquiry of anticipation–let me mention again Dubois (since 1996) and the notions of incursion and hyperincursion, and Holmberg (since 1997) and space aspects–express themselves on the matter. Van de Vijver (1997), among few others (cf. CASYS 98 and the contributions listed in the Program for CASYS 99), has already attempted to shed light on what seems philosophically pertinent to the subject. She is right in stating that the global/local relation more adequately pertains to anticipation than does the pair particular/universal. The practical implications of this observation have not yet been defined.

From my own perspective–based on pragmatics, which means grounding in the practical experience through which humans become what they are–anticipation corresponds to a characteristic of living beings as they attain the condition at which they constitute their own nature. At this level, predictive models of themselves become possible, and progressively necessary. The thematization of anticipation, which as far as we know is a human being's expression of self-awareness and connectedness, is only one aspect of this stage in the unfolding of our species. According to the premise of this perspective, pragmatics–expressed in what we do and how and why we do what we do–is where our understanding of anticipation originates. This is also where it returns, in the form of optimizing our actions, including those of defining what these actions should be, what sequence they follow, and how we evaluate them. All these are projections against a future towards which each of us is moving, all tainted by some form of finality (telos), or at least by its less disputed relative called intentionality. The generic why of our existence is embedded in this intentionality.
The source of this finality is the others, those we interact with either in cooperating or in competing, or in a sense of belonging, which over time allowed for the constitution of the identity called humanness. Gordon Pask (1980), the almost legendary cybernetician, called such an entity a cognitive system.

2.1 Self-Entailment and Anticipation

In a dialog on entailment (cf. http://views.vcu.edu/complex)–a fundamental concept in Rosen's explanation of anticipation–a line originating with François Jacob was dropped: "Theories come and go, the frog stays." (Incidentally, Jacob is the author of The Logic of Life, Princeton University Press, 1993.) This brings us back to a question formulated above: Does it matter to a programmer (the reader may substitute his/her profession for the word programmer) that anticipation is based on the self-entailment characteristic of the living? Or that evolution is the source of entailment?

If we compare the various types of computation acknowledged since people started building computers and writing software programs, we find that during the syntactically driven initial phases, such considerations actually could not affect the pragmatics of programming. Only relatively recently has a rudimentary semantic dimension been added to computation. In the final analysis, it does not matter which microelectronics, computer architecture, programming languages, operating systems, networks, or communication protocols are used. For all practical purposes, what matters is that between the world and the computation pertinent to some aspects of this world, the relations are still extremely limited. If a programmer is not just in the business of writing lines of code for a specific application that might improve through a syntactically supported emulation of anticipatory characteristics–think about macros that save typing time by "guessing" which word or expression a user started to type in and "filling in" the letters or words (a minimal sketch of such a macro follows this section)–then it matters that there is something like self-entailment. It matters, too, that the notion of self-entailment supports more adequate explanations of biological processes than any other concept of the physical sciences. On a semantic level, the awareness of self-entailment (through self-associative memory) leads to better solutions in speech and handwriting recognition. However, once the pragmatic level is reached–we are still far from this–understanding the philosophic implications of the nature and condition of anticipation becomes crucial. The reason is that it is not at all clear that characteristics of the living–self-repair, metabolism, and anticipation–can be effectively embodied in machines. This is why the notion of frontier science was mentioned in the Introduction. The frontier is that of conceiving and implementing life-like systems. Whether Rosen's (M,R)-model, defined by metabolism and repair, or others, such as those advanced in neural networks, evolutionary computation, and ALife, will qualify as necessary and sufficient for making anticipation possible outside the realm of the living remains to be seen. I (Nadin, 1988, 1991) argue for computers with a variable configuration based on anticipatory procedures. This model is inspired by the dynamics of the constitution and interaction of minds, but does not suggest an imitation of such processes.
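As a deliberately trivial concretization of the "macro" example above (our sketch, not Nadin's): a completion routine that "guesses" a word from a typed prefix. Everything it does is driven by stored past frequencies, which is exactly why it is a syntactic emulation of anticipation rather than anticipation in the paper's sense: it ranks the past; it does not model the future.

```python
# A dumb "anticipatory" macro: guess a completion from typed prefixes using
# only past frequency. Purely reactive/syntactic, per the argument above.
from collections import Counter

class PrefixGuesser:
    def __init__(self):
        self.seen = Counter()

    def observe(self, word: str) -> None:
        self.seen[word] += 1          # the system's only notion of "past"

    def fill_in(self, prefix: str):
        """Most frequent previously seen word starting with prefix, if any."""
        candidates = [(n, w) for w, n in self.seen.items() if w.startswith(prefix)]
        return max(candidates)[1] if candidates else None

g = PrefixGuesser()
for w in ["anticipation", "anticipatory", "anticipation", "antique"]:
    g.observe(w)
print(g.fill_in("anti"))   # -> "anticipation", a reactive guess from frequency
```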
The issue is not, however, reducible to the means (digital computation; algorithmic, non-algorithmic, or heterogenous processing; signal processing; quantum computation; etc.), but to the encompassing goal.

2.2 Specializations

To nobody's surprise, anticipation, in some form or another, is part of the research program of logic, cognitive science, computer science, robotics, networking, molecular biology, genetics, medicine, art and design, nanotechnology, the mathematics of dynamic systems, and what has become known as ALife, i.e., the field of inquiry into artificial life. Anticipation involves semiotic notions, as it involves a deep understanding of complexity, or, better yet, an improved understanding of complexity that integrates quantitative and qualitative aspects. It is not at all clear that full-fledged anticipation, in the form of machine-supported anticipatory functioning, is a goal within the reach of the species through whose cognitive characteristics it came into being and which became aware of it. Machines, or computations, for those who focus on the various data processing machines, able to anticipate earthquakes, hurricanes, aesthetic satisfaction, disease, financial market performance, lottery drawings, military actions, scientific breakthroughs, social unrest, irrational human behavior, etc., could well claim total control of our universe of existence. Indeed, to correctly anticipate is to be in control. This rather simplistic image of machines or computations able to anticipate cannot be disregarded or relegated to science fiction. Cloning is here to stay; so are many techniques embodying the once-discredited causa finalis. A philosophic foundation of anticipation has to entertain the many questions and aspects that pertain to the basic assertion according to which anticipation reflects part of our cognitive make-up and, moreover, constitutes its foundation. Even if Kuhn's model of scientific paradigm change had not been abused to the extent of its trivialization, I would avoid the suggestion that anticipation is a new paradigm. Rather, as a frontier in science, it transcends its many specializations as it establishes the requirement for a different way of thinking, a fundamentally different epistemological foundation.

3 Pro-Action vs. Re-Action

Now that the epistemological requirement of a different way of thinking has been brought up, I would like to revisit work done during the years when the very subject of anticipation seemed not to exist (except in the title of Rosen's book). My claim in 1988 (on the occasion of a lecture presented at Ohio State University) was that anticipation lies at the foundation of the entire cognitive activity of the human being. Moreover, through anticipation, we humans gain insight into what keeps our world together as a coherent whole whose future states stand in correlation to the present state as minds grasp it. Minds exist only in relation to other minds; they are instantiations of co-relations. This is also the main thesis of this paper.

For over 300 years–since Descartes' major elaborations (1637, 1644) and Newton's Principia (1687)–science has advanced in understanding what for all practical purposes came to be known as the reactive modality. Causality is experienced in the reactive model of the universe, to the detriment of any pro-active manifestations of phenomena not reducible to the cause-and-effect chain or describable in the vocabulary of determinism.
It is important to understand that what is at issue here is not some silly semantic game, but rather a pragmatic horizon: Are human actions (through which individuals and groups identify themselves, i.e., self-constitute, Nadin 1997) in reaction to something assumed as given, or are human actions in anticipation of something that can be described as a goal, ideal, or value? But even in this formulation (in which the vocabulary is as far as it can be from the vitalistic notions to which Descartes, Newton, and many others reacted), the suspicion of teleological dynamics–is there a given goal or direction, a final vector?–is not erased. Despite progress made in the last 30 years in understanding dynamic systems, it is still difficult to accept the connection between goal and self-organization, between ideal, or value, and emergent properties.

3.1 Minds Are Anticipations

The mind is in anticipation of events, that is, ahead of them–this was my main thesis over ten years ago. Advanced research (Libet 1985, 1989) on the so-called "readiness potential" supported this statement. In recent years, work on the "wet brain" as well as work supported by MR-based visualization technologies has fully confirmed this understanding. Having entered the difficult dialog on the nature of cognitive processes from a perspective that no longer accepted the exclusive premise of representation–another heritage from Descartes–I had to examine how processes of self-constitution eventually result in shared knowledge without the assumption of a homunculus. What seemed inexplicable from a perspective of classical or relativist physics–a vast amount of actions that seemed instantaneous, in the absence of a better explanation for their connectedness–was coming into focus as constitutive of the human mind.

Anticipatory cognitive and motoric scripts, from which in a given context one or another is instantiated, were advanced at that time as a possible description for how, from among many pro-active possible courses of action, one would be realized. Today I would call those possible scripts models and insist that a coherent description of the functioning of the mind is based on the assumption that there are many such models. Additionally, I would add that learning, in its many realizations, is to be understood as an important form of stimulating the generation of models, and of stimulating a competitive relation among them. [Von Foerster (1999) entertains a motto on his e-mail address that is an encapsulation of what I just described: "Act always so as to increase the number of choices."]

In a subtle way, defense mechanisms–from blinking to reflexes of all types–belong to this family. Anticipatory nausea and vomiting (whether on a ship or related to chemotherapy) is another example. The phantom limb phenomenon (sensation in the area of an amputated limb) is mirrored by pain or discomfort before something could have actually caused them. There is a descriptive instance in Lewis Carroll's Through the Looking Glass. Before accidentally pricking her finger, the White Queen cries: "I haven't pricked it yet, but I soon shall." She lives life in reverse, which is what anticipation ultimately affords–provided that the interpretation process is triggered and made part of the self-constitutive pragmatics.

3.1.1 Anticipation is Distributed

As recently as this year, results in the study of the anticipation of moving stimuli by the retina (Berry et al., 1999) made it clear that anticipation is distributed.
The research proved that anticipation of moving stimuli begins in the retina. We no longer expect the visual cortex to do some heavy extrapolation of trajectory (this was the predominant model until recently); we now know that retinal processing is pro-active. Even if pro-activity is not equally distributed along all sensory channels–some are slower in anticipating than others, not least because sound travels at a slower speed than light does, for example–it defines a characteristic of human perception and sheds new light on motoric activity.

3.1.2 Knowledge as Construction

But there is also Kelly's (1955) constructivist position, which must be acknowledged by researchers in the psychological foundation of anticipation. The adequacy of our constructs is, in his view, their predictive utility. Coherence is gained as we improve our capacity to anticipate events. Knowledge is constructed; validated anticipations enhance cognitive confidence and make further constructs possible. In Kelly's terms, human anticipation originates in the psychological realm (the mind) and reflects the intention to make possible a correspondence between a future experience and certain of our anticipations (Kelly, 1955; Mancuso & Adams-Webber, 1982). Since states of mind somehow represent states of the world, the adequacy of anticipations remains a matter of the test of experience. The basic function of all our representations, as the "fundamental postulate" ascertains, is anticipation (a temporal projection). Alternative courses of action, considered in respect to their anticipated consequences, represent the pragmatic dimension of this view. Observed phenomena and their descriptions are not independent of the assumptions we make. This applies to perceptual control theory, as it applies to Kelly's perspective and to any other theory. Moreover, assumptions facilitate or hinder new observations. For those who adopted the view according to which a future state cannot affect a present state, anticipation makes no sense, regardless of whether one points to the subject in various religious schemes, in biology, or in the quantum realm. The situation is not unlike that of Euclidean geometry vs. non-Euclidean geometries. To see the world anew is not an easy task! Anticipation of moving stimuli, to get back to the discovery mentioned above, is recorded in the form of spike trains of many ganglion cells in the retina. It follows from known mechanisms of retinal processing; in particular, the contrast-gain control mechanism suggests that there will be limits to what kinds of stimuli can be anticipated. Researchers report that variations of speed, for instance, are important; variations of direction are not. Furthermore, since space-based anticipation and time-based anticipation have different metrics, it remains to be seen whether a dominance of one mode over the other is established. As we know, in many cases the meeting between a visual map (projection of the retina to the tectum) and an auditory map takes place in a process called binding. How the two maps are eventually aligned is far from being a matter of semantics (or terminology, if you wish). Synchronization mechanisms, of a nature we cannot yet define, play an important role here. Obviously, this is not control of imagination, even if those pushing such terms feel more confident in their de facto rejection of anticipation. Arguing from a formal system to existence is quite different from the reverse argumentation (from existence to formalism).
Arguing from computation can take place only within the confines of this particular experience: the more constrained a mechanism, the more programmable it is (as Rosen pointed out, 1991, p. 238). Admittedly, reaction is indeed programmable, even if at times this is not a trivial task. Pro-active characteristics make for quite a different task. The most impressive success stories so far are in the area of modeling and simulation. To give only one example: Chances are that your laptop (or any other device you use) will one day fall. The future state–stress, strain, depending upon the height, angle, weight, material, etc.–and the current state are in a relation that most frequently does not interest the user of such a portable device. It used to be that physical models were built and subjected to tests (this applies, for instance, to cars as well as to photo cameras). We can model, and thus to a certain point anticipate, the effects of various possible crashes through simulations based on finite-element analysis. That anticipation itself, in its full meaning, is different in nature from such simulations passes without too much comment. The kind of model we need in order to generate anticipations is a question to which we shall return.

3.2 A Rapidly Expanding Area of Inquiry

An exhaustive analysis of the database of contributions to fundamental and applied research on anticipation reveals that it covers a wide area of inquiry. In many cases, those involved are not even aware of the anticipatory theme. They see the trees, but not yet the forest. More telling is the fact that the major current directions of scientific research allow for, or even require, an anticipatory angle. The simulation mentioned above does not anticipate the fall of the laptop; rather, it visualizes–conveniently for the benefit of designers, engineers, production managers, etc.–what could happen if this possibility were realized. From this possibilistic viewpoint, we infer the necessary characteristics of the product, corresponding to its use (how much force can be exerted on the keyboard, screen, mouse, etc.?) or to its accidental fall. That is, we design in anticipation of such possibilities. Or we should! I would like to mention other examples, without the claim of even being close to a complete list.

3.2.1 An Example from Genetics

But more than Rosen, whose work belongs rather to the meta-level, it was genetics that recovered the terminology of heredity. Having done so, it established a framework of implicit anticipations grounded in the genetic program. Of exceptional importance are the resulting medical alternatives to the "fix-it" syndrome of healthcare practiced as "car repair" (including the new obsession with spare parts and artificial surrogates). Genetic medicine, as slow in coming as it is, is fundamentally geared towards the active recognition of anticipatory traits, instead of pursuing the reactive model based on physical determinism. Although there is not yet a remedy for Huntington's disease, myotonic dystrophy, schizophrenia, Alzheimer's disease, or Parkinson's disease, medical researchers are making progress in the direction of better understanding how the future (the eventual state of diagnosed disease) co-relates to a present state (the unfolding of the individual in time). In the language of medicine, anticipation describes the tendency of such hereditary diseases to become symptomatic at a younger age, and sometimes to become more severe, with each new generation.
We now have two parallel paths of anticipation: one is that of the disorder itself, i.e., the observed object; the other, that of observation. The elaborations within second-order cybernetics (von Foerster, 1976) on the relation between these paths (the classical subject-object problem) make any further comment superfluous. The convergence of the two paths, in what became known as eigenbehavior (or eigenvalue), is of interest to those actively seeking to transcend the identification of genetic defects through the genetic design of a cure. After all, a cure can be conceived as a repair mechanism, related to the process of anticipation.

3.2.2 Art, Simulacrum, Fabrication

That art (healing was also seen as a special type of art not so long ago), in all its manifestations, including the arts of writing (poetry, fiction, drama), theatrical performance, and design–driven by purpose (telos) and in anticipation of what it makes possible–incorporates anticipatory features might be accepted as a metaphor. But once one becomes familiar with what it means to draw, paint, compose, design, write, sing, or perform (with or without devices), anticipation can be seen as the act through which the future (of the work) defines the current condition of the individual in the process of his or her self-constitution as an artist. What is interesting in both medicine and art is that the imitation can result only in a category of artifacts to be called simulacra. In other words, the mimesis approach (for example, biomimesis as an attempt to produce organisms, i.e., replicate life from the inanimate; aesthetic mimesis, replicating art by starting with a mechanism such as the one embodied in a computer program) remains a simulacrum. Between simulacra and what was intended (organisms and, respectively, art) there remains the distance between the authentic and the imitation, human art and machine art. They are, nevertheless, justified in more than one aspect: They can be used for many applications, and they deserve to be valued as products of high competence and extreme performance. But no one could or should ignore that the pragmatics of fabrication, characteristic of machines, and the pragmatics of human self-constitution within a dynamic involving anticipation are fundamentally different.

3.2.3 Learning (Human and Machine-Based)

Learning–to mention yet another example–is by its nature an anticipatory activity: Learning associates the future with expectations and a sui generis reward mechanism. These are very often dissociated from the context in which learning takes place. That this is fundamentally different from generating predictive models and stimulating competition among them might not be totally clear to the proponents of the so-called computational learning theory (COLT), or to a number of researchers of learning–all from reputable fields of scientific inquiry but captive to the action-reaction model dominant in education. It is probably only fair to remark in this vein that teaching and learning experiences within the machine-based model of current education are not different from those mimicked in some computational form. Computer-based training, a very limited experience focused on a well-defined body of information, can provide a cost-efficient alternative to a variety of training programs. What it cannot do is stimulate and trigger anticipatory characteristics because, by design, it is not supposed to override the action-reaction cycle.
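The contrast just drawn–between a single action-reaction loop and the generation of competing predictive models–can be given a minimal computational illustration. The sketch below is not an implementation of any system discussed in this paper; the three predictor functions and the weighting rule are assumptions chosen only to make the idea concrete: several models guess the next value of a signal, and their influence grows or shrinks with their predictive success.

# A minimal, hypothetical sketch: competing models predict the next
# value of a signal; each model's weight is updated according to its
# accuracy, so that "anticipation" emerges from the competition rather
# than from any single privileged model.

models = {
    "persistence": lambda history: history[-1],
    "trend": lambda history: 2 * history[-1] - history[-2],
    "mean": lambda history: sum(history) / len(history),
}
weights = {name: 1.0 for name in models}

def anticipate(history):
    """Weighted consensus of all currently competing models."""
    total = sum(weights.values())
    return sum(weights[m] * models[m](history) for m in models) / total

def update(history, actual):
    """Reward accurate models, penalize inaccurate ones."""
    for name, model in models.items():
        error = abs(model(history) - actual)
        weights[name] *= 1.0 / (1.0 + error)

signal = [1.0, 1.2, 1.4, 1.7, 2.1, 2.6]
for t in range(2, len(signal)):
    update(signal[:t], signal[t])
print(anticipate(signal))  # consensus guess for the next, not yet seen, value
print(weights)             # for this accelerating signal, "trend" dominates

Nothing in this toy settles the epistemological question raised above; it only shows that maintaining a changing population of models, and letting their relative standing be decided by experience, is a mechanism distinct from the single stimulus-response loop of computer-based training.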
3.2.4 Reward

Alternatively, one can see promise in the formalism of neural networks. For instance, anticipation of reward or punishment was observed in functional neuroanatomy research (cf. Knutson, 1998). Activation of circuitry (to use the current descriptive language of brain activity) running from the medial dorsal thalamus through the anterior cingulate and mesial prefrontal cortex was co-related not to motor response but to personality variations. Accordingly, it is quite tempting to look at such mechanisms and to try to introduce reward anticipation into neural network procedures as a method of improving the performance of artificially mimicked decision-making. Homan (1997) reports on neural networks that "can anticipate rewards before they occur, and use these expectations to make decisions." The focus of this type of research is to emulate biological processes, in particular the dopamine-based reward mechanism that lies behind a variety of goal-oriented behaviors. Dynamic programming supports a similar objective. It focuses on states; their dynamic reassessment is propagated through the neural network in ways considered similar to those mapped in the successful enlisting of brain capabilities. Training, as a form of conditioning based on anticipation, is probably complementary to what one would call instinct-based (or natural) action.

3.2.5 Motion Planning

Animation and robot motion planning, as distant from each other as they appear to some of us, share the goal of path planning, that is, of finding a collision-free path between an initial position (the robot's arm or the arm of an animated character) and a goal position. It is clear that the future state influences the current state and that those planning the motion actually coordinate the relation between the two states. In predictive programs, anticipation is pursued as an evaluation procedure among many possibilities, as in economics or in the social sciences. The focus changes from movement (and planning) to dynamics and probability. A large number of applications, such as pro-active error detection in networks, hard-disk arm movement in anticipation of future requests, traffic control, strategic games (including military confrontation), and risk management, prompted interest in the many varieties under which anticipatory characteristics can be identified.

3.3 Aspects of Anticipation

At this point–where anticipation as a natural entailment process and the embodying of anticipatory features in machine-like artifacts meet–it is quite useful to mention that expectation, prediction, and planning–to which others add forecasting and guessing–are not fully equivalent to anticipation, but aspects of it. Let us also make note of the fact that we are not pursuing distinctions on the semantic level, but on the pragmatic–the only level at which it makes sense to approach the subject.

3.3.1 Expectation, Prediction, Forecast

The practical experience through which humans constitute themselves in expectation of something–rain (when atmospheric conditions are conducive), meeting someone, closing a transaction, etc.–has to be understood as a process of unfolding possibilities, not as an active search within a field of potential events. Expectation involves waiting; it is a rather passive state, experienced in connection with something at least probable. Predictions are practical experiences of inferences (weak or strong, arbitrary or motivated, clear-cut or fuzzy, explicit or implicit, etc.)
along the physical timeline, from the past to the future. Checking the barometer and noticing pain in an arthritic knee are very different experiences; so are the outcomes: imperative prediction or tentative, ambiguous foretelling. To predict is to connect a datum (information received as cues, indices, causal identifiers, and the like), experienced once or repeatedly, with the unfolding of a similar experience assumed to lead to a related result. It should be noted here that the deterministic perspective implies that causality affords us predictive power. Based on the deterministic model, many predictive endeavors of impressive performance are successfully carried out (in the form of astronomical tables, geomagnetic data, and calculations on which the entire space program relies). Under certain circumstances (such as devising economic policies, participating in financial markets, or mining data for political purposes), predictions can form a pragmatic context that embodies the prediction. In other words, a self-referential loop is put in place. Not fundamentally different are forecasts, although the etymology points to a different pragmatics, i.e., one that involves randomness. What pragmatically distinguishes these from predictions is the focus on specific future events (weather forecasting is the best-known pragmatic example, that is, the self-constitution of the forecaster through an analytic activity of data acquisition, processing, and interpretation, whose output takes very precise forms corresponding to the intended communication process). These events are subject to a dynamics for which the immediate deterministic descriptions no longer suffice. Whether economic, meteorological, or geophysical (regarding earthquakes, in particular), such forecasts are subject to an interplay of initial conditions, internal and external dynamics, linearity, and nonlinearity (to name only a few factors) that is still beyond our capacity to grasp, let alone to express in some efficient computational form. Although forecasts involve a predictive dimension, the two differ in scope and in specific method. A computer program for predicting weather could process historic data (weather patterns over a long period of time). Its purpose is global prediction (for a season, a year, a decade, etc.). A forecasting algorithm, if at all possible, would be rather local and specific: tomorrow at 11:30 am. Dynamic systems theory tells us how much more difficult forecasting is in comparison with prediction. Our expectations, predictions, and forecasts co-constitute our pragmatics. That is, they participate in making the world of our actions. There is formative power in each of them. Although expecting, predicting, and forecasting good weather will not bring the sun out, such anticipations can lead to better chances for a political candidate in an election. Indeed, we need to distinguish between categories of events to which these forms of anticipation apply. Some are beyond our current efforts to shape events and will probably remain so; others belong to the realm of human interaction. Recursion would easily describe the self-referential nature of some particular anticipations: expected outcome = f(expectation). That such cases basically belong to the category of indeterminate problems is more suspected than acknowledged. Mutually reinforcing expectations, predictions, and forecasts are the result of more than one hypothesis and their comparative (not necessarily explicit) evaluation.
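The self-referential loop expected outcome = f(expectation) can be made concrete with a minimal numerical sketch. The update rule below is an invented assumption, chosen only for illustration: an expectation that partly produces the outcome it predicts (as with polls that influence the election they claim merely to measure) settles into a fixed point.

# A minimal, hypothetical sketch of the self-referential loop
# expected_outcome = f(expectation): the outcome is partly driven
# by the expectation itself (a self-fulfilling component).

def f(expectation, base=0.3, influence=0.5):
    """Outcome = underlying tendency + the share contributed
    by the expectation itself."""
    return base + influence * expectation

expectation = 0.0
for _ in range(20):
    expectation = f(expectation)  # the prediction feeds back into the outcome
print(round(expectation, 4))  # converges to base/(1 - influence) = 0.6

Such a loop is the simplest case of the indeterminate problems suspected above: the "correct" expectation is not given in advance but is co-produced by the very act of expecting.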
This model of mutually reinforcing expectations can be implemented relatively efficiently in genetic computations.

3.3.2 Plans, Design, Management

Plans are the expression of more or less well-defined goals associated with the means necessary and sufficient to achieve them. They are conceived in a practical experience taking place under the expectation of reaching an acceptable, optimal, or high ratio between effort and result. Planning is an active pursuit within which expectations are encoded, predictions are made, and forecasts of all kinds (e.g., price of raw materials and energy sources, weather conditions, individual and collective patterns of behavior, etc.) are considered. Design and architecture as pragmatic endeavors with clearly defined goals (i.e., to conceive of everything that qualifies as shelter and supports life and work in a "sheltered" society: housing, workplace, various institutions, leisure, etc.) are particular practical experiences that involve planning, but extend well beyond it, at least in the anticipatory aesthetic dimension. Every design is the expression of a possible future state–a new chip, a communication protocol, clothing, books, transportation means, medicine, political systems or events, erotic stimuli, meals–that affects the current state–of individuals, groups, society, etc.–through the constitution of perceived and acknowledged needs, expectations, and desires. The dynamics of change embodied in design anticipations is normally higher than that of all other known human practical experiences. Policy, management, and prevention (to name a few additional aspects or dimensions of anticipation) involve giving advance thought, looking forward, directing towards something that, as a goal, influences our actions in reaching it. All these characteristics are part of the dictionary definitions of anticipation. The various words (such as those just referred to) involved in the scientific discourse on anticipation, i.e., its various meanings, pertain to its many aspects; but they are not equivalent.

3.4 Resilience

It is probably useful to interrupt this account of the many ways through which anticipation penetrates the scientific agenda and to invoke a distinction that, in the beginning, defies our acquired understanding of anticipation, at least along the distinctions made above. In a deceptively light presentation, Postrel (1997) suggests a counterdistinction: resilience vs. anticipation. If the subject were only what distinguishes Silicon Valley from the Boston area–both known as regions of technical innovation and fast economic growth, the two elements invoked being predictable weather patterns and, respectively, earthquakes, anything but predictable–we would not have to bother. However, her article presents the political theory of a proficient political scholar, Wildavsky (1988), focused on meeting the challenge of risk through anticipation, understood as planning that aspires to perfect foresight, or through resilience, a dynamic response based on providing adjustments. The definitions are quite telling: "Anticipation is a mode of control by a central mind; efforts are made to predict and prevent potential dangers before damage is done. . . . Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back." Not surprising is the inference that "anticipation seeks to preserve stability: the less fluctuation, the better. Resilience accommodates variability. . . ."
We seem to have here a reverse view of all that has been presented so far: Anticipation means to see the world as predictable. But it also qualifies anticipation as being quite inappropriate within dynamic systems, that is, exactly where anticipation makes a difference! Rapid changes, especially unexpected turns of events, seem the congenital weakness of anticipation in this model. (Those critical of evolution theory refer to punctuated equilibrium, i.e., fast change for which evolution theory has yet to produce a convincing account.) Hubristic central planning and over-caution can undermine anticipation. This view of anticipation would also imply that it cannot be properly pursued within open systems or within transitory processes–again, where we could most benefit from it. Resilience depends on spontaneity, serendipity, on the unforeseeable. Wildavsky expressed this in rather sweeping statements: ". . . not only markets rely on spontaneity; science and democracy do as well. . . ." Computations of risk are, of course, also part of the subject of anticipation.

3.5 Synchronization

Yet another element of this methodological overview (far from being complete) is synchronization. It can serve here as a terminological cue; or, to recall Rosen (1991), co-temporality or simultaneity would do. In the canonical description of anticipation–the current state of the system is defined by a future state–one aspect of time, sequentiality or precedence (one instant precedes the other), takes over. Yet in the universe of simultaneous events, we encounter anticipation not only as it refers to aspects of space, but as it takes the form of synchronization mechanisms. Whether in genetic mechanisms, in musical perception (where temporality is defining), or in the perception of the world (I have already mentioned above the way in which the visual and the auditory "maps" are brought in sync, the so-called binding problem, i.e., the integration of sensory information arriving on different channels), to name just a few, the coordination mechanism is the final guarantor of the system's coherent functioning. As a synchronization mechanism, anticipation means to "know" (the quotation marks are used to identify a way of speaking) when relatively unrelated, or even related, events have to be integrated in order to make sense. It is therefore helpful to consider this particular kind of anticipation as the result of the work of a "conductor" (or switch, for those technically inclined) prompting the various sound streams originating from independent sources, each operating within its own confines, to merge in a synchronized concert. Cognitively, this means to ensure that what is synchronous in the world is ultimately perceived as such, although information arrives asynchronously in the brain. Synchronization, as opposed to precedence, is not tolerant of error. Precedence is less restrictive: The cold temperatures that might affect the viability (survival) of a deciduous tree, and the cycle of day and night, affected by the cycle of seasons, allow for a range. This is why leaves fall over a relatively long time, depending upon tree kinds and configurations (lone trees, groves, forests, etc.).
So we learn that not only is there a variety of soft-defined forms of anticipation (weather prediction, even after data collection, processing, and interpretation have made spectacular advances, is as soft as soft gets), but also that there are high-precision mechanisms that deserve to be accounted for if we expect to understand, and moreover make use of, anticipatory technologies.

3.6 Some Working Hypotheses

3.6.1 Rosen's Model

Rosen points to the difference between the dynamics of the given object system S and that of the coupled model M; that is, the difference between real time in S and the modeling time of M (faster than that of S) is indicative of anticipation. True, time in this particular description ceases to be an objective dimension of the world, since we can produce quite a variety of related and unrelated time sequences. He also remarks that the requirement that M be a perfect model is almost never fulfilled. Therefore, the behavior of such a coupled system can only be qualified as quasi-anticipatory (in which E represents effectors through which action is triggered by M within S); cf. Fig. 1.

Fig. 1 Rosen's model

As aspects of this functioning, Rosen names, rather ambiguously, planning, management, and policies. Essential here are the parametrization of M and S and the choice of the model. The standard definition, quoted again and again, is that an anticipatory system "contains a predictive model of itself and/or of its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant" (Rosen 1985, p. 339). The definition is not only contradictory–as Dubois (1997) noticed–but also circular: anticipation as the result of a weaker form of anticipation (prediction) exercised through a model. Much more interesting are Rosen's examples: "If I am walking in the woods and I see a bear appear on the path ahead of me, I will immediately tend to vacate the premises"; the "wired-in" winterizing behavior of deciduous trees; the biosynthetic pathway with a forward activation. Each sheds light on the distinction between processes that seem vaguely correlated: background information (what could happen if the encounter with the bear took place, based on what has already happened to others); the cycle of day and night and the related pattern of lower temperatures as days get shorter with the onset of autumn; the pathway for the forward activation and the viability of the cell itself. What is not at all clear is how less than obvious weak correlations end up as powerful anticipation links: heading away from the bear ("I change my present course of action, in accordance with my model's prediction," 1985, p. 7) usually eliminates the danger; loss of leaves saves the tree from freezing; forward activation, as an adaptive process, increases the viability of the cell. We have a "temporal spanning," as Rosen calls it. In his example of senescence ("an almost ubiquitous property of organisms," "a generalized maladaptation without any localizable failure in specific subsystems," 1985, p. 402), it becomes even more clear that the time factor is of the essence in the biological realm.

3.6.2 Inclusive Recursion (the Dubois Path)

Dubois (1997, p. 4) is correct in pointing out that this approach is reminiscent of classical control theory. He submits a formal language of inclusive (or implicit) recursion, more precisely, of self-referential systems, in which the value of a variable at a later time (t+1) explicitly contains a predictive model of itself (p. 6):
x(t+1) = f[x(t), x(t+1), p]   (1a)

In this expression, x is the state variable of the system, t stands for time (the present; t–1 is the past, t+1 is the future), and p is a control parameter. Dubois starts from recursion within dynamical discrete systems, where the future state of a system depends exclusively on its present and past:

x(t+1) = f[..., x(t–1), x(t), p]   (1b)

He further defines incursion, i.e., an inclusive or implicit recursion, as

x(t+1) = f[..., x(t–2), x(t–1), x(t), x(t+1), ..., p]   (2)

and exemplifies its simplest case as a self-referential system (cf. 1a and 1b). The embedded nature of such a system (it contains a model of itself) explains some of its characteristics, in particular the fact that it is purpose (i.e., finality, or telos) driven. Having provided a mathematical description, Dubois further reasons from the formalism submitted to the mechanism of anticipation: The dynamics of the system is represented by

ΔS/Δt = [S(t+Δt) – S(t)]/Δt = F[S(t), M(t+Δt)]   (3)

That of the predictive model is

ΔM/Δt = [M(t+Δt) – M(t)]/Δt = G[M(t)]   (4)

In order to avoid the contradiction in Rosen's model, Dubois suggests that

ΔM/Δt = [M(t+Δt) – M(t)]/Δt = F[S(t), M(t+Δt)]   (5)

Obviously, what he ascertains is that there is no difference between the system S and the anticipatory model, the result being

ΔS/Δt = [S(t+Δt) – S(t)]/Δt = F[S(t), S(t+Δt)]   (6)

which is, according to his definition, an incursive system. That Rosen and Dubois take very different positions is clear. In Rosen's view, since the "heart of recursion is the conversion of the present to the future" (1991, p. 78), and anticipation is an arrow pointing in the opposite direction, recursions could not capture the nature of anticipatory processes. Dubois, in producing a different type of recursion, in which the future affects the dynamics, partially contradicts Rosen's view. Incursion (inclusive or implicit recursion) and hyperincursion (an incursion with multiple solutions) describe a particular kind of predictive behavior, according to Dubois. Building upon the McCulloch and Pitts (1943) formal neuron, and taking von Neumann's suggestion that a hybrid digital-analog neuron configuration could explain brain dynamics, Dubois (1990, 1992) submitted a fractal model of neural systems and furthered a non-linear threshold logic (with Resconi, 1993). The incursive map

x(t) = 1 – abs(1 – 2x(t+1))   (7)

where "abs" denotes the absolute value and the iterate x(t) is a function of its iterate at a future time t+1, can subsequently be transformed into a hyper-recursive map:

1 – 2x(t+1) = ±(1 – x(t))   (8)

so that

x(t+1) = [1 ± (x(t) – 1)]/2   (9)

It is clear that once an initial condition x(0) is defined, the successive iterated values x(t+1), for t = 0, 1, 2, …, T, produce two iterations corresponding to the ± sign. In order to avoid the increase of the number of iterated values, i.e., in order to define a single trajectory, a control function u(T–k) is introduced. The resulting hyperincursive process is expressed through

x(t+1) = [1 + (1 – 2u(t+1))(x(t) – 1)]/2 = x(t)/2 + u(t+1) – x(t)·u(t+1)   (10)

It turns out that this equation describes the von Neumann hybrid version, with x(t) as a floating-point (analog) variable and the control function u(t) as a digital variable accepting 0 and 1 as values, so that the + or – sign results from

Sg = 2u(t) – 1, for t = 1, 2, …, T   (11)

It is tempting to see this hybrid neuron as a building block of a functional entity endowed with anticipatory properties.
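Because equation (10) is fully explicit, it is easy to experiment with numerically. The following sketch (mine, not Dubois'; the names and the sample control sequences are assumptions for illustration) iterates the hyperincursive map, with the digital control u selecting one of the two branches of (9) at every step and thereby defining a single trajectory:

# A minimal sketch of the hyperincursive map in equation (10):
# x(t+1) = x(t)/2 + u(t+1) - x(t)*u(t+1), where x is the analog
# (floating-point) state and u in {0, 1} is the digital control
# selecting the +/- branch at each step.

def hyperincursive_step(x, u):
    """One iteration of the map for a given branch choice u."""
    return x / 2 + u - x * u

def trajectory(x0, controls):
    """Iterate the map from x0 under a given control sequence."""
    xs = [x0]
    for u in controls:
        xs.append(hyperincursive_step(xs[-1], u))
    return xs

# The same initial condition yields different trajectories depending
# on the control sequence, i.e., on which branch is selected when.
print(trajectory(0.3, [0, 1, 0, 1]))
print(trajectory(0.3, [1, 1, 0, 0]))

The hybrid character is visible in the code itself: the state x is continuous, while the control u is a binary, digital variable–precisely the digital-analog pairing attributed above to von Neumann's suggestion.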
Let me add here that Dubois has continued his work in the direction of producing formal descriptions for neural net applications, memory research, and brain modeling (1998). His work is convincing, but, again, it takes a different direction from the work pursued by Rosen, if we correctly understand Rosen's warning (1991) concerning the non-fractionability of the (M, R)-system, i.e., its intrinsic relational character. Nevertheless, Dubois' results will be seen by many as another suggestion that hybrid analog/digital computation better reflects the complexity of the living and thus might support effective information processing for applications in which the living is not reduced to the physical.

3.6.3 Space-Based Computation

Cellular automata, as discrete space-time models, constitute yet another way of modeling anticipation as a space-based computation. More details can be found in the work of Holmberg (1997), who introduces the concept of spatial automata and correctly positions this approach, as well as some basic considerations on the nature of anticipation in technological applications, within systems theory. Not surprisingly, the community of researchers of anticipation is generating further working hypotheses (Julià, 1998; Sommer, 1998; addressing intentionality and learnability, respectively). It is very difficult to keep a record of all of these contributions, and even more difficult to comment on works in their incipient phase. Applications of fundamental theoretical anticipatory models are also being submitted in increasing numbers. Dubois himself suggested quite a number of applications, including robotics and neural machines. My focus is on variable configuration computers (regardless of the nature of computation). Obviously, those and similar attempts (many in the program of the CASYS conferences) are quite different from training in various sports, sports performance (think about anticipation in fencing!), political action, the functioning of the judicial system, the dissemination of writing rules for achieving suspense, the automatic generation of jokes (Barker, 1996), the building of economic models, and so on.

3.6.4 Dynamic Competing Models

Without attempting to submit a full-fledged alternative to either Rosen's or Dubois' descriptions of anticipation, I will only mention once more that my own work speaks in favor of a changing set of models and of a procedure for maintaining competition among them.

Fig. 2 Changing models and competition among models

Since a diagram is a formalism of sorts, not unlike a mathematical or logical expression, I also reason from it to the dynamics of the system. The diagram ascertains that anticipation implies awareness, and thus processes of interpretation–hence semiotic processes. Mathematical or logical descriptions do not explicitly address awareness, but rather build upon it as a given. Some scientists subsequently commit the error of assuming that because awareness is not explicitly encoded in the formulae, it plays no role whatsoever in the system described. As we shall see in the discussion of the non-local nature of anticipation, quantum experiments suggest that in the absence of the observer, our descriptions of the universe make no sense.

3.6.5 Variability and Computation

To make things even more challenging, there are instances in which anticipation, resulting from the dynamics of natural evolution, is subject to variability, i.e., change. In every game situation, anticipations are at work in a competitive environment.
Chess players, not unlike "black-box" traders on the financial or stock markets, as well as professional gamblers, could provide a huge amount of testimony regarding "anticipation as a moving target." In my model of an anticipation mechanism based on a changing number of models and on stimulating competition among them, games can serve as a source of information in the validation process. The mathematics of game theory, not unlike the mathematics of ALife formal descriptions applied to trading mechanisms or to flocking behavior, is in many respects pertinent to questions of anticipation. What is not explicitly provided through the ever-expanding list of application examples is the broad perspective. Indeed, when the performing musician of a well-known musical score seeks an expression that deviates from the expected sound (without being unfaithful to the composer), we have anticipation at work: not necessarily as a result of an understanding of its many implications, but rather as a spontaneously developed means of expression. Many similar anticipation-based characteristics are recognizable in the practical human experience of self-constitution in competitive situations, in survival instances (some action performed ahead of the destructive instant), in the interpretation of various types of symptoms. After all, the immune system is one of the most impressive examples of the (M,R) models that Rosen describes. It is in anticipation of an infinity of possible factors that affect the organism during its unfolding from inception to death. The metabolism component and the repair component, although different, are themselves co-related. From the perspective opened by the subject of anticipation, it is implausible that a cure for a deficient immune system will be found in any place other than its repair function. In contradistinction, as we shall see, when one searches for information on the World-Wide Web, there is anticipation involved in the mechanism of pre-fetching information that eventually gives the user the feeling of interactivity, even though what technology makes possible is a simulacrum. The question to be asked, but not necessarily answered in this paper, is: To what extent does becoming aware of anticipation, or living in a particular anticipation (of a concert, of a joke, or of an inherited disease), affect our practical experiences of self-constitution, regardless of whether we build a technology inspired by it or only use the technology? Or to what extent are such experiences part of the technology? Friedrich Dürrenmatt, the Swiss writer, once remarked (1962, in the play The Physicists), "A machine only becomes useful when it has grown independent of the knowledge that led to its discovery." This statement will follow us as we get closer to the association between anticipation and computation. It suggests that if we are able to endow machines with anticipatory characteristics (prediction, expectancy, planning, etc.), chances are that our relation to such machines will eventually become more natural. This might change our relation to anticipation altogether, either by further honing natural anticipation capabilities or by effecting their extinction. The broader picture that results from the examination of what actually defines the field of inquiry identifiable as anticipation–in living systems and in machines–is at best contradictory. To be candid, it is also disconcerting, especially in view of the many so-called anticipation-based claims.
But this should not be a discouraging factor. Rather, it should make the need for foundational work even more obvious. One or two books, many disparate articles in various journals, plus the Proceedings of the Computing Anticipatory Systems (CASYS) conferences do not yet constitute a sufficient grounding. It is with this understanding in mind that I have undertaken this preliminary overview (which will eventually become my second book on the subject of anticipation). Since the time my book (1991) was published, and even more after its posting on the World-Wide Web, I have faced colleagues who were rather confused. They wanted to know what, in my opinion, anticipation is; but they were not willing to commit themselves to the subject. It impressed them; but it also made them feel uneasy, because the solid foundation of determinism, upon which their reputations were built, and from which they operate, seemed to be put in question. In addition, funding agencies have trouble locating anticipation in their cubbyholes, and even more trouble providing peer reviews from people willing to jump over their own shadow and entertain the idea that their views, deeply rooted in the paradigm of physics and machines, deserve to be challenged. My research at Stanford University–which constituted the basis for this report–provided a stimulating academic environment, but not many possible research partners. Students in my classes turned out to be far more receptive to the idea of anticipation than my colleagues. The summary given in this section stands as a testimony to progress, but no more than that, unless it is integrated in the articulation of research hypotheses and models for future development.

4 Minds, Knowledge, Computation–a Borgesian Horizon

The anticipatory nature of the mind–and by this I mean the processes of mind constitution as well as mind interaction–together with the understanding of anticipation as a distributed characteristic of the human being, represents an epistemological and cognitive premise. Let us put these ascertainments in the broader perspective of knowledge–the ultimate goal of our inquiry (knowledge at work included, of course). Niels Bohr (1934), well ahead of the illustrious founders of second-order cybernetics or of today's constructivist model of science, risked a rather scandalous sentence: "It is wrong to think that the task of physics is to find out how nature is." He went on to claim that "Physics concerns what we can say about nature." In this vein, we can say that Rosen and others have proven that anticipation is a characteristic of natural processes. We can also take this description and try to make it the blueprint of various applications (some of which were reported above).

4.1 Computation and Prolepsis

Computation is the dominant aspect of the Weltanschauung today. It is not only a representation, but also the mechanism for processing representations (for which reason I call the computer a semiotic engine). The attempt to reduce everything there is to computation is not new. Science might be rigorous, but it is also inherently opportunistic. That is, those constituting themselves as scientists (i.e., defining themselves in pragmatic endeavors labeled as science) are human beings living in the reality of a generic conflict between goals and means.
Having said this, well aware that Feyerabend (1975) et al. articulated this thought even more obliquely, I have to add that anticipation as computation is, from an epistemological perspective, probably more appropriate to our understanding of the concept than what the various pre-computation disciplines had to say or speculate about anticipation. Between Epicurus' (cf. 1933) term prolepsis–rule, or standard of judgment (the second criterion for truth)–and the variety of analytical interpretations leading to the current infatuation with anticipation, there is a succession of epistemological viewpoints. It is not that background knowledge–"the idea of an object previously acquired through sensations," to which Epicurus referred as a necessary condition for understanding–changed its condition from a criterion of truth to a computational entity. After all, computer systems used in speech recognition or in vision involve a proleptic component. (The machine is trained to recognize something identified as such.) Rather, the pragmatic framework changed, and accordingly we constitute ourselves as researchers of the world in which we live by means of computation rather than by the means used in Epicurus' physics and corresponding theory of knowledge (the canon, as it is known). What I want to say is that computation, and the subsequent attempt to see anticipation as computation, are but another description of the world and, particularly in the latter case, of our attempts to form an effective body of knowledge about it. In his discussion of prolepsis in the Critique of Pure Reason, Kant (1781) saw it within his description of the world, that is, in the form of "something that can be known a priori." In Kant's view, only the "property of possessing a degree" is subject to anticipation. Indeed, in computation we can attach certain weights to various data before the data are actually input. These weights will affect the result; and, in many cases, the art–that is, the appropriateness–of specifying weights influences predictions and forecasts. But no one would infer à rebours that Kant saw the world as a computation, or that knowledge was the result of a computational process.

4.2 Evolutionary Computation

The substratum of basic principles on which a theory of anticipation relies (Epicurus, Kant, Rosen, etc.) affects the theory itself, and thus its possible technological implementations. It has not actually been convincingly demonstrated that we can compute anticipation. What has been accomplished, again and again, is the embodiment of anticipatory characteristics, such as prediction, expectation, management, planning, etc., in computer programs. What has also been carried out is the implementation of control mechanisms, and, bringing us closer to our subject, the modeling of selection mechanisms in the now well-known genetic computing models inspired by the guiding Darwinian concept. Evolutionary computation might well end up displaying anticipatory characteristics if we take the time and the knowledge needed to apply ourselves to the task. It will not be a spontaneous birth, but rather a designed and carefully executed computation. Entailment might prove the critical element, as Rosen's work seems to indicate.

4.2.1 Co-Relation vs. Computation

Once a modeling relation is established between a natural system and a formal one, we can start inferring from the formal system to the natural. Let me mention that here we are in the territory of views that often contradict each other.
(For instance, Daniel Dubois and I are still in dialog over some of the examples to follow.) Neural networks or models of ALife, such as the simulation of collections of concurrently interacting agents, qualify as candidates for such an exercise. However, almost no effort has been made to elucidate the functioning of the causal arrow from the future to the present. In winter, temperatures will fall below the freezing point; leaves fall from deciduous trees in anticipation, but the trigger comes from a different process, i.e., the diminishing length of daylight, which stands in no direct causal relation to the phenomenon just mentioned. This is a co-relation of processes, not a computation, or at least not a Turing machine-based computation. The migration of birds is another example; yet others are the immune system, the sleep mechanism, the blinking mechanism, and the behavior of Pfiesteria (the single-cell microorganisms that produce deadly toxins in anticipation of the fish they will eventually kill). But if we want to stick to computation, which is a description different from the one pursued until now, we land in a domain of parallel processes, not very sophisticated–probably even less sophisticated than the level of a UNIX operating system–but of a much higher order of magnitude. We are in what was described as a big-numbers-based reality. If we could control the process of "shorter days," we could eventually graph the inter-relation among the various components at work leading to the shedding of leaves during autumn, or to the sophisticated patterns of behavior of birds preparing for migration.

4.3 Large Numbers and Simple Processes

With respect to brain activity, things are definitely more complicated, but they also fall in the realm of incredibly large numbers applying to rather simple entities and processes. The ongoing CAM-Brain Project (Hugo de Garis, 1994) is supposed to result in an artificial brain of one billion neurons (compare this to the 100 to 120 billion neurons of a wet brain) implemented on Field Programmable Gate Arrays. These digital circuits can be reconfigured as the tasks at hand might require. The notion of reconfiguration elicits our understanding of anticipation. Still, it remains to be seen whether the artificial brain will actually drive a robot or only simulate the robot's functioning, as it also remains to be seen whether evolutionary patterns will support vision, hearing, their binding, coordinated movements, and, farther down the line, decision-making. The mind in anticipation of events (as I defined mind) is a lead. If we could parametrize the cognitive process and control the various channels, we could in principle learn more about how neuroactivity precedes moving one's hand by 800 milliseconds, and what the consequences of this are for human anticipation abilities. These are all possible experiments, after each of which we will end up not only with more data (the blessing and curse of our age!), but also necessarily with the desire to gain a better understanding of what these data mean. If Rosen's hypothesis–that anticipation is what distinguishes the biological realm (life) from the physical world–holds, it remains to be seen whether we can do more than compute only particular aspects of it–prediction, expectation, planning, etc.–outside the living.
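One such particular aspect, computed outside the living, is the predictive pre-fetching invoked earlier in connection with the World-Wide Web. The sketch below is a hypothetical illustration, not a description of any deployed system; the page names and the counting rule are invented: a cache guesses the most likely next request from observed transitions and fetches it ahead of the user.

# A hypothetical sketch of pre-fetching as computed pseudo-anticipation:
# the most frequent successor of the current page is fetched before it
# is requested. All names and data here are invented for illustration.

from collections import defaultdict

transitions = defaultdict(lambda: defaultdict(int))  # page -> next page -> count
cache = set()

def observe(prev_page, next_page):
    """Record one observed navigation step."""
    transitions[prev_page][next_page] += 1

def prefetch(current_page):
    """Fetch ahead of time the most frequent successor of the current page."""
    successors = transitions[current_page]
    if successors:
        likely = max(successors, key=successors.get)
        cache.add(likely)  # already fetched when the user asks for it

history = ["home", "news", "home", "news", "home", "sports"]
for prev, nxt in zip(history, history[1:]):
    observe(prev, nxt)
prefetch("home")
print(cache)  # {'news'}: the most likely next request was fetched in advance

The guess is extrapolated entirely from the past; nothing here corresponds to the causal arrow from the future to the present discussed above–which is why such mechanisms remain simulacra of anticipation.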
Pseudo-anticipation is already part of our practical experience: satellite launches, virtual surgery, and the pre-fetching of data in order to optimize networks are but three examples of effective pseudo-anticipation. If we could create life, we could study how anticipation emerges as one of its irreducible, or only as one of its specific, properties. Short of this, ALife is involved in the simulation of lifelike processes. Rosen, in defining complexity as not simulatable, comes close to Feynman's (1982) hope that one can best study physics by actually conducting the calculations of the world of physics on the physical entities to be studied. One can call this epistemological horizon Borgesian, knowing that the ideal Borgesian map was none other than the territory mapped. At this point, we need to arrive at a deeper understanding of what we want to do. Regardless of the metaphor, the epistemological foundation does not change. The knowing subject is already shaped by the implicit anticipatory dimension of mind interaction; in other words, the answer to the question meant to increase our knowledge is anticipated. Computation is as adequate a metaphor as we can have today, provided that we do not expect the metaphor to automatically generate the answers to our many questions. Regardless, the question concerning anticipation in the living and in the non-living is far from being settled, even after we might agree on a computational model or expand to something else, such as co-relation, which could either transcend computation or expand it beyond Turing's universal machine.

5 Revisiting Non-Locality

I took it upon myself to approach these matters well aware that I am advancing in mined territory. Comparisons notwithstanding, such was the situation faced by the proponents of quantum theory. To nobody's surprise, Einstein took quantum mechanics, as developed by Heisenberg, Schrödinger, Dirac, et al., under scrutiny and, well before the theory was even really established, raised objections to it, as well as to Bohr's interpretation. From these objections (the complete list is known as the EPR Paper, 1935, for Einstein, Podolsky, and Rosen), one in particular seems connected to the subject of anticipation. Einstein had a major problem with the property of non-locality–the correlations among separated parts of a quantum system across space and time. He defined such correlations as "spooky actions at a distance" ("spukhafte Fernwirkungen"), remarking that they would have to take place at speeds faster than that of light in order to make the various parts of the quantum system match. In simple terms, this spooky action at a distance refers to the links that can develop between two or more photons, electrons, or atoms, even if they are remotely placed in the world. One example often mentioned is the decay of a pion (a subatomic particle). The resulting electron and positron move in opposite directions. Regardless of how far apart they are, they remain connected. We notice the connection only when we measure some of their properties (well aware of the influence measurement has)–their spin, for example. Since the initial pion had no spin, the electron and the positron will have opposite-sense spins, so that the net spin is conserved at zero. So, at a distance, if the spin of the electron is clockwise, the spin of the positron is counter-clockwise. It would be out of place to enter here into the details of the discussion and the ensuing developments.
Let me mention only that, in support of the EPR document, Bohm (1951) tried, through his notion of a local hidden variable, to find a way for the correlations to be established at a speed lower than that of light. He wanted to save causality within quantum predictions. Bohm's attempt recalls what the community of researchers is trying to accomplish in approaching aspects of anticipation (such as prediction, expectation, forecast, etc.) with the idea that they cover the entire subject. Bell (1964, 1966) produced a theorem demonstrating that certain experimental tests could distinguish the predictions of quantum mechanics from those of any local hidden variable theory. (Incidentally, the physicist Henry P. Stapp (1991) characterized Bell's theorem as "the greatest discovery of all science.") Again, this recalls by analogy Rosen's position, according to which anticipation is what (among other things) distinguishes the living from the rest of the world. It states that we can clearly discern a particular aspect of anticipation, provided in some formal description or in some computer implementation, from one that is natural. I mention these two episodes from a history still unfolding in order to explain that what we say with respect to nature–as Bohr defined the goal of physics–will ultimately be subjected to the test of our practical experiences. Einstein has been proven wrong with respect to his understanding of non-locality through many experiments that baffle our common sense, but his theory of relativity still stands. Spooky actions at a distance are a very intuitive description of how someone educated in the spirit of physical determinism, and thinking within this spirit, understands how the future impacts the present, or how anticipation computes backwards from the future to the present. He, like many others, preached the need for learning "to see the world anew," but was unable to position himself in a consciousness different from the one embodied in his theory. As I worked on this text (more precisely, after reworking a draft dated July 22, 1999), Daniel Dubois graciously drew my attention to a number of his research accomplishments pertinent to the connection between anticipation and non-locality. Indeed, over the last seven years, he has applied his mathematical formalism to quite a number of computational aspects of anticipation. Consequently, he was able to establish, by means of incursion and hyperincursion, that the computation pertinent to the membrane neural potential (used as a model of a brain) "gives rise to non-locality effects" (Dubois, 1999). His argument is in line with von Neumann's analogy between the computer and the brain. But we are not yet beyond a first analogy (or reference). Non-locality is, in the last analysis, distance-independent. Furthermore, non-locality is not a limited characteristic of the universe, but a global rule. In the words of Gribbin (1998), non-locality "cuts into the idea of the separateness of things." If the "no-signaling" criterion (energy and information travel no faster than the speed of light) protects the "chain of cause and effect" (effects can never happen before their causes), non-locality ensures the coherence of the universe. Reconciliation between non-locality and causality might therefore be suggestive for our understanding of anticipation. In such a case, the co-relation among elements involved in anticipation can be seen as a computation, but one different in nature from that of a digital computer, i.e., of a Turing machine.
It follows from here that anticipation understood as co-relation–a notion we will soon focus on–must be a computation different in type from that embodied in a Turing machine.

5.1 Quantum Semiotics, Link Theory, Co-Relation

Let me preface this section by ascertaining that anticipation is a particular form of non-locality, which is quite different from saying that there is non-locality in anticipation. (This is what actually distinguishes my thesis from the results of Dubois.) More precisely, its object is co-relations (over space and time) resulting from entanglements characteristic of the living, and eventually extending beyond the living, as in the quantum universe. These co-relations correspond to the integrated character of the world and, moreover, of the universe. Our descriptions ascertain this character and are ultimately an active constituent of this universe. We introduce in this statement a semiotic notion of special significance to the quantum realm: Sign systems not only represent, but also constitute our universe. As with qubits (information units in the quantum universe), we can refer to qusigns as particular semiotic entities through which our descriptions and interpretations of quantum phenomena are made possible.

5.1.1 The Semiotic Engine

As a semiotic engine (Nadin, 1998), a digital computer processes a variety of possible descriptions of ourselves and of the universe of our existence. These descriptions can be indexical (marks left by the entity described), iconic (based on resemblance), or symbolic (established through convention). Anticipatory computation is based on the notion that every sign is in anticipation of its interpretation. Signs are not constituted at the object level, but in an open-ended, infinite sign process (semiosis). In sign processes, the arrow of time can run in both directions: from the past through the present to the future, or the other way around, from the future to the present. Signs carry the future (intentions, desires, needs, ideals, etc., all of a nature different from what is given, i.e., all in the range of a final cause) into the present and thus allow us to derive a coherent image of the universe. Actually, not unlike the solution given in the Schrödinger equation, a semiosis is constituted in both directions: from the past into the future, and from the future into the present and further into the past. The interpretant (i.e., the infinite process of sign interpretation) is probably what the standard Copenhagen Interpretation of quantum mechanics considered in defining the so-called "intelligent observer." The two directions of semiosis are in co-relation. In the first case, we constitute understandings based on previous semiotic processes. In the second, we actually make up the world as we constitute ourselves as part of it. This means that the notion of sign has to reflect the two arrows. In other words, the Peircean sign definition (i.e., the arrow from object to representamen to interpretant) has to be "reworded":

Fig. 3 Qusign definition

The language of the diagram allows for such a "rewording" much better than so-called natural language: The interpretant as a sign refers to something else anticipated in and through the sign. (Peirce's original definition of the sign is "something which stands to somebody for something in some respect or capacity," 2.228.)
Qusigns are thus the unity between the analytical and the synthetic dimension of the sign; their “spin” (to borrow from the description of qubits) can well describe the particular pragmatics through which their meaning is constituted.

5.1.2 Knowing in Advance

The Copenhagen Interpretation of quantum mechanics (developed around 1930, primarily by Bohr and Heisenberg) should make us aware of the fact that observation (as in the examples advanced by Rosen et al.), measurement (as in the evaluation of the learning performance of neural networks), and description (such as those telling us how a certain piece of software with anticipatory features works) are more pertinent to our understanding of what we observe, measure, or describe than to understanding the phenomena from which they derive. To measure is to describe the dynamics of what we measure. The coherence we gain is that of our own knowledge, where dynamics resides as a description. However, the anticipation chain takes the path of something that smacks of backward causality, which the established scientific community excluded for a long time and still has difficulty in understanding. Quantum particle “tunneling”–a phenomenon related to quantum uncertainty and to wave-particle duality–might explain our own existence on the planet, but we still don’t know what it means (as Feynman repeatedly stated, verbally and in writing, 1965). Quite a number of experiments (cf. Raymond Chiao, University of California-Berkeley; Paul Kwiat, University of Innsbruck; Aephraim Steinberg, US National Institute of Standards and Technology, Maryland, among others) ended up confirming that “the way in which a photon starting out on its journey behaves” in different experimental set-ups suggests that anticipation is at work in the quantum realm. Photons behave (cf. Gribbin, 1999) as if they “knew in advance what kind of experiment they were about to go through.” In view of these experiments, Rosen would have a hard time trying to argue that anticipation is a property exclusive to the living. Moreover, we find in such examples the justification for quantum semiotics: “The behavior of the photons at the beam-splitter is changed by how we are looking at them, even when we have not yet made up our minds about how we are going to look at them. The computer-controlled pseudo-random layout of the device used in the experiment is anticipated by the photon” (Gribbin and Chimsky, 1996). In other words, it is an interpretant process. I should mention here that within the relatively young field of mathematical research called link theory, a framework that generalizes the notion of causality is established in a way that removes its unidirectionality (cf. Etter, 1999). The relational aspect of this theory makes it a very good candidate for a closer look at anticipation, in particular at what I call co-relations.

5.1.3 Coupling Strength

In various fields of human inquiry, the clear-cut distinction between past, present, and future is simply breaking down. No matter how deep and broad the grudges against a reductionist physical model (such as Newton’s) are, Newtonian dynamics is reversible in time, and so is quantum mechanics. The goal of producing a “unified” description of the universe can be justified in more than one way, but regardless of the perspective, coupling strength is what interests us, that is, what “holds” the “universe” together. This applies to the coherence of the human mind, as it applies to unicellular organisms or to the cosmos at large.
It might be that anticipation, in a manner yet unknown to us, plays a role in the coupling of the many parts of the universe and of everything else that appears as coherent to us. Galilean and Newtonian mechanics advanced answers, which were subsequently reformulated and expressed in a more comprehensive way in the theory of relativity (special and general), and afterwards in quantum theories (quantum mechanics, quantum field theory, quantum gravity). In the mechanical universe, to anticipate could mean to pre-compute the trajectory of the moving entity seen as constitutive of broad physical reality. But the causal chain is so tight that the fundamental equation allows only for the existence of recursions (from the present to the future), which we can represent by stacks and compute relatively easily. The past is closed; the future, however, is open, since we can define ad infinitum the coordinates of the changing position of a moving entity. No guesswork: Everything is determined, at least up to a certain level of complexity. Relativity does not do away with the openness of the future, but makes it more difficult to grasp. Within black holes (inherent in the relativistic description, but not reducible to it), time is cyclic. In Einstein’s curved space-time, a circular “time-line” (Etter’s pun) is no more surprising than a “circle around a cylinder in ordinary space.” This, however, leads to a cognitive problem: how to accommodate a cycle with openness. Anticipation related to this description of time is quite different from that which might be associated with a physical-mechanical description.

5.2 Possible and Probable

Quantum theories, as we have suggested, pose even more difficult questions in regard to non-locality, and thus to entanglement. In this new cognitive territory, things get even more difficult to comprehend. Determinism, which means that something is (1) or is not (0) caused by something else, gives way to a probabilistic and/or possibilistic distribution: Something is caused probably (i.e., to a certain degree expressed in terms of probability, that is, a statistical distribution) by something else. Or it is caused possibly (in Zadeh’s sense, 1977), which is a determination different from probability (although not totally unrelated), by something else. Probabilistic influences can be represented through a transition matrix. Given the relation between two entities A and B and their respective states, we can define a Markov chain, i.e., a transition matrix whose ij-th entry is the probability of i given j. Such a chain tells us how influences are strung together (chained) and can serve as a predictive mechanism, thus covering some subset of what we call anticipation (a minimal sketch follows below). Recently, weather satellite observations of the density of green vegetation in Africa (an indication of rainfall) were connected through such processes to the danger of an outbreak of Rift Valley Fever, for which Linthicum et al. (1999) devised a metric based on climate indicators for a forecasting procedure. The “black boxes” chained in such processes have a single input and a single output representing the complete state variable of the system as it changes over time. Climate and health (the risk of malaria, hantavirus, cholera) are related in more than one way (Epstein, 1999). These examples are less probabilistic than possibilistic. If we pursue possibilities, that is, infer from a determined set of what is possible, a different form of prediction can eventually be achieved.
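Before turning to abduction, the transition-matrix idea can be made concrete with a minimal sketch. The states, probabilities, and the rainfall reading below are invented for illustration; they are not taken from Linthicum's metric.

```python
import numpy as np

# Toy transition matrix P: P[i, j] = probability of state i given state j
# (columns sum to 1, matching the "probability of i given j" reading
# above). States and numbers are invented for illustration.
states = ["dry", "moderate rain", "heavy rain"]
P = np.array([
    [0.7, 0.3, 0.1],
    [0.2, 0.5, 0.4],
    [0.1, 0.2, 0.5],
])

def predict(p0, steps):
    """Propagate a state distribution `steps` transitions ahead."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = P @ p
    return p

# Starting from certain "heavy rain", forecast three steps ahead.
for s, prob in zip(states, predict([0.0, 0.0, 1.0], steps=3)):
    print(f"{s}: {prob:.3f}")
```

Chaining such matrices is exactly what makes the mechanism predictive: the forecast is nothing but repeated matrix application to the present state.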
Abductive inferences belong to this possibilistic category and are characteristic of functional diagnosis procedures. Here we have an example of semiotics at work, i.e., abductions on symptoms, not really far from what Epicurus meant by prolepsis.

5.2.1 Linked Incursions

For the aspects of anticipation that belong to a non-deterministic realm, we can further try to link descriptions of the form

y = f(x) or z = g(w)   (12a, b)

Indeed, if we substitute y for w, our descriptions become y = f(x) and z = g(y), that is,

z = g(f(x))   (13a, b, c)

The result is a functional relation of the composed functions. Without going into the details of Etter’s theory, let me suggest that it can serve as an efficient method for encoding a variety of relations (not only in the case of the identity of two variables). If in the functional description we substitute not the variables (w with y, as shown in the example given above) but the relation between them, we reach a different level of relational encoding that can better support modeling. I even suggest that recursions, incursions, and hyperincursions can be defined for co-related events. For example:

x(t_{i+1}) = f[x(t_i), x(t_{i+1}), p]   (14)
y(t_{j+1}) = g[y(t_j), y(t_{j+1}), r]   (15)

in which time in the two systems is obviously not the same (t_i ≠ t_j). (A toy numerical illustration of such an incursive step is given at the end of this section.) A co-relation of time can be established, as can a co-relation among the states x(t_i) and y(t_j) of the two systems, through the intermediary of a third system acting as the “conductor,” or coordinator, z(t_i, t_j, t_k), i.e., dependent upon both the time in each system and its own time metrics. To elaborate on the mathematics of linked incursions goes beyond the intentions of this paper. Let us not forget that we are pursuing an analysis of the particular ways in which anticipation takes place in the successive unified descriptions of the universe produced so far.

5.2.2 Alternative Computations

In the quantum perspective of a double identity–particle and wave–trajectory is the superposition of every possible location that a moving entity could conceivably occupy. This is where recursivity, in the classic sense, breaks down. I suspect that Dubois was motivated to look beyond recursivity for improved mathematical tools, to what he calls incursion and hyperincursion, for this particular reason. But I also suspect that linked incursions and hyperincursions will eventually afford more results in dealing with various aspects of anticipation and non-locality. In respect to the explicit statement, prompted by quantum-mechanical non-locality, that anticipation could be a form of computation different from that described by a Turing machine, it is only in the nature of the argument to say that full-fledged anticipation, not just some anticipatory characteristics (prediction, planning, forecasting, etc.), is probably inherent in quantum computation. Rosen recognized early on (1972) that quantum descriptions were a promising path, although among his publications (even more manuscripts belong to his legacy, cf. Rosen, 1999) there are no further leads in this direction. Efforts to transcend digital computing through quantum computation are significant in many ways. From the perspective of anticipation, I think Feynman’s concept (1982) comes closer to what we are after: understanding quantum dynamics not by using a digital computer (as in the tradition of reductionist thinking), but by making use of the elements involved in quantum interactions. As the situation is loosely described: Nature does this calculation all the time!
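Here is the toy illustration of the incursive step announced above. In Equation 14 the future state appears on both sides, so each step amounts to solving an implicit equation; a minimal sketch, with an invented map f that is not Dubois's own system:

```python
def incursive_step(f, x_cur, p, tol=1e-12, max_iter=200):
    """One incursive update x(t_{i+1}) = f[x(t_i), x(t_{i+1}), p]:
    the future state appears on both sides, so the step is an implicit
    equation, solved here by naive fixed-point iteration."""
    x_next = x_cur                      # initial guess: no change
    for _ in range(max_iter):
        x_new = f(x_cur, x_next, p)
        if abs(x_new - x_next) < tol:
            return x_new
        x_next = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Invented toy map: the next state damps toward the parameter p while
# depending on its own (future) value.
f = lambda x, x_next, p: x + 0.5 * (p - x_next)

x = 0.0
for t in range(5):
    x = incursive_step(f, x, p=1.0)
    print(f"t={t + 1}: x={x:.6f}")
```

The contrast with ordinary recursion is visible in the inner loop: the future state has to be made consistent with itself before the system can advance.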
Much the same can be said about protein folding, a typical anticipatory process: a small increase in energy (warming up) drives the folding process back, only to have it repeated as the energy decreases. This process might also well qualify as an anticipatory computation, with a particular scope, not reducible to digital computation. (As a matter of fact, protein folding exceeds the complexity of digital computation.) It is an efficient procedure, this much we know; but about how it takes place we know as little as about anticipation itself.

5.2.3 Anticipation as Co-Relation (Or: Co-Relation as Anticipation?)

Having advanced the notion of anticipation as a co-relation, I would like to point to instances of co-relation that are characteristic of experiences of practical human self-constitution in fields other than the much researched control theory of mechanisms, economic modeling, medicine, networking, and genetic computing. There is, as Peat (undated) once remarked, a strong concern with “a non-local representation of space” in art and literature. The integration of many viewpoints (perspectives) of the same event illustrates the thought. Reconstruction (in the perception of art and literature) means the realization of a future state (describable as understanding, or as coordination of the aesthetic intent with the aesthetic interpretation) in the current state of the dynamic system represented by the work of art or of writing, and by its many interpreters (an open-ended process). In Descartes’ and Newton’s traditions, space and time are local: a taming of artistic expression took place. Peat claims that the “tableau,” i.e., the painting, becomes a snapshot in which “motion and change is frozen in a single instant of time. This is a form of objectivity which the concert, the novel, and the diarist express.” With the advent of relativity and quantum physics, many perspectives are overlaid. As Peat puts it, “In our century, painting has returned to the non-local order.” This holds true for writing (think of Joyce), as it does for the dynamic arts (performance, film, video, multimedia). Complementary elements, entangled throughout the unifying body of the work or of its re-presentation, are brought into coherence by co-relations within non-locality-based interactions. Peat goes on to show that communication “cries out for a non-local” description: source and receiver cannot be treated as separable entities. (They are linked, as he poetically describes the process, “by a weak beam of coherent light.”) Meaning—which “cannot be associated exclusively with either participant” (n.b., in communication)—could be “said to be ‘non-local’.”

6 The Relational Path to Co-Relations

That computation, in one of its very many current forms or in a combination of such forms (such as hybrid algorithmic-nonalgorithmic computations), can embody and serve as a test for hypotheses about anticipation should come as no surprise. Neither should the use of computation imply the understanding that anticipation is ultimately a computation, that it is the only form, or the appropriate form, through which we can implement anticipation-based notions. It is an exciting but dangerous path: If everything is described as a computation—no matter how different computation forms can be—then nothing is a computation, because we lose any distinguishing reference. Epistemologically, this is a dead end.
Furthermore, it has not yet been established whether information processing is a prerequisite of anticipation or only one means among many for describing it. While we could, in principle, embody anticipatory features in computer programs, we might miss a broad variety of anticipation characteristics. For instance, progress was made in describing the behavior of flocks (cf. the Swarm Simulation System at the Santa Fe Institute). But bird migration goes far beyond the modeled behavioral interrelationships. Trigger-information differentials, group interaction, learning, orientation, etc. are far more sophisticated than what has been modeled so far. The immune system is yet another example of a complexity level that by far exceeds everything we can imagine within the computational model. Be all this as it may, our current challenge is to express co-relations, which appear as predefined or emerging relations in a dynamic system, by means of information processing in some computational form, or by means of describing natural entanglements. If we could reach these goals, we would effect a change in quality–from a functional to a relational model. Here are some suggestions for this approach.

6.1 Function and Relation

Relations between two or among several entities can be quite complicated. A solid relational foundation requires the understanding of what distinguishes relation from function. For all practical purposes, functions (also called mappings) can be linear or non-linear. (Of course, further distinctions are also important: They can be single- or many-valued, real- or complex-valued, etc.) Relations, however, cover a broader spectrum. A relation of dependence (or independence) can be immediate or intermediated. It can involve hierarchical aspects (as to what affects the relation more within a polyvalent connection), as well as order or randomness. Relations, not unlike functions, can be one-to-one, one-to-many, many-to-one, or many-to-many. We can define the negation of a relation, a double negation, an inverse relation, etc. A full logic of relations has not been developed, as far as I know. Rudimentary aspects are, however, part of what, after Peirce (1870, 1883) and Schröder (The Circle of Operation of Logical Calculus, 1877), became known as a logic of relations. Russell and Whitehead (Principia Mathematica, 1910) made further clarifications. Let us assume a simple case: xRy, in which x stands in relation R to y (son of, higher than, warmer than, premise of, etc.). If we consider various aspects of the world and describe them as relationally connected, we can wind up with statements such as xR1y, zR2w, etc. In this form, it is not clear that Ri exhausts all the relations between the related entities; neither is it clear to what extent we can establish further relations between two relations Ri and Rj, and thus eventually infer from their interrelationship new relations among entities that did not have an apparent relation in the first place. In a wide sense, a relation is an n-ary (n = 1, 2, 3, …) “connection”; a binary relation is a particular case and means that the relation xRy is true or false for a pair x, y in the Cartesian product X × Y. As opposed to functions, for which we have relatively good mathematical descriptions, relations are more difficult to encode, but richer in their encodings (a minimal sketch of one such encoding follows).
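A minimal sketch, assuming the usual encoding of a binary relation as a set of ordered pairs over a hypothetical finite domain; converse, composition, and the reflexivity/transitivity tests reduce to a few lines of set algebra:

```python
# A binary relation encoded as a set of ordered pairs over a
# hypothetical finite domain (both invented for illustration).
domain = {1, 2, 3, 4}
R = {(1, 2), (2, 3), (3, 4)}        # e.g., "immediately precedes"

def converse(rel):
    """The inverse relation: x R y becomes y R~ x."""
    return {(y, x) for (x, y) in rel}

def compose(r1, r2):
    """x (r1;r2) z iff there exists y with x r1 y and y r2 z."""
    return {(x, z) for (x, y) in r1 for (y2, z) in r2 if y == y2}

def is_reflexive(rel, dom):
    return all((x, x) in rel for x in dom)

def is_transitive(rel):
    return compose(rel, rel) <= rel   # subset test on sets of pairs

print(converse(R))                    # {(2, 1), (3, 2), (4, 3)}
print(compose(R, R))                  # {(1, 3), (2, 4)}
print(is_reflexive(R, domain), is_transitive(R))  # False False
```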
Their classification (e.g., inverse, reflexive, symmetric, transitive, equivalence, etc.) is important insofar as it leads to higher orders (e.g., a reflexive and transitive relation is called a pre-ordering, while an ordering is a reflexive, transitive, and antisymmetric relation).

6.1.1 N-ary Relations

If we revisit some of the examples of anticipation produced so far in the literature–Rosen’s deciduous trees, Peat’s communication as a non-local unifying process, Linthicum’s and Epstein’s metrics of weather data and disease patterns, the cognitive implications of the many competing models from which one is eventually instantiated in an action, or the hyperincursion mechanism developed by Dubois (to name but a few)–it becomes obvious that we have chains of n-ary relations: xR_i^n y (in which R_i^n is a specific n-ary relation); that is, in a given situation, several relations are possible, and of all those possible, some are more probable than others. To anticipate means to establish which co-relations, i.e., which relations among relations, are possible, and of those, which are most probable. Anticipation is a process. It takes place within a system, and we interpret it as being part of the dynamics of the system. Observed from outside the system–deciduous trees lose their leaves, birds migrate, tennis players anticipate the served ball–anticipation appears as goal-driven (teleological). In particular, coherence is preserved through anticipation; or a different coherence among the variables of a situation is introduced (as in playing chess, or predicting market behavior). Pragmatically, this results in choices driven by possibilities, which appear as embodied in future states. The tennis ball is served and has to be returned in a well-defined area–an important constraint, an almost necessary condition for the game ever to take place! At a speed of over 100 miles per hour, the served ball is not returned through a reaction-based hit, but as the result of an anticipated course of action, one from among many continuously generated well ahead of the serve or as it progresses. If the serving area were increased by only 10%, the chances for anticipation would be reduced in a proportion that changes the game from one of resemblance and order to a chaotic, incoherent action that makes no competitive sense. The competition among the various models (all possibilities, but along a probability distribution corresponding to the particular style of the serving player) allows for a successful return, itself subject to various models and to competition among them (a toy sketch of such model competition follows at the end of this subsection). The whole game can be seen as an unfolding chain of co-relations, i.e., a computation controlled by a range of acceptable parameters. The immune system works in a fundamentally similar fashion. Co-relations corresponding to a wide variety of acceptable parameters are pursued on a continuous basis. Acclimatization, i.e., the way humans adapt to changes in seasons, is but a preservation of the coherence of our individual and collective existence under the influence of anticipated changes in temperature, humidity, the day-night cycle, and a number of other parameters, of some of which we are not even aware.
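A toy sketch of the model competition just described: candidate "models" of the serve carry prior probabilities reflecting the server's style and are re-weighted by early cues, so that a return model is selected before the ball bounces. All names and numbers are invented for illustration:

```python
# Invented candidate "models" of the incoming serve with prior
# probabilities reflecting the server's style; purely illustrative.
priors = {"wide": 0.5, "body": 0.3, "down_the_T": 0.2}

# Invented likelihoods of an early visual cue under each model.
likelihood = {
    ("toss_left", "wide"): 0.7,
    ("toss_left", "body"): 0.2,
    ("toss_left", "down_the_T"): 0.1,
}

weights = dict(priors)
for cue in ["toss_left"]:          # cues arrive before the ball does
    for model in weights:
        weights[model] *= likelihood[(cue, model)]
    total = sum(weights.values())
    weights = {m: w / total for m, w in weights.items()}

# Commit to the most probable return model ahead of the actual bounce.
best = max(weights, key=weights.get)
print(best, {m: round(w, 3) for m, w in weights.items()})
```

The point is not the arithmetic but the timing: the selection is completed before the event it concerns, which is what distinguishes an anticipated return from a reaction.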
6.1.2 Instantiated Co-Relations

But having given the example of an unfolding sequence does not place us in the domain of non-locality. For this we need to distinguish between the diachronic and synchronic axes. A strictly deterministic explanation will always place the anticipated in the sequence of cause-and-effect/action-reaction. The tennis ball is served, days are getting shorter, a virus causes an infection–all seen as causes. In the anticipatory view, the ball has actually not yet been served when the sequence of models, from among which one will become the return, starts being generated. The anticipation leading to the fall of leaves is the result of a co-relation involving more than one parameter. What appears as a reaction of the immune system is actually also a co-relation involving the metabolism and the self-repair function. On the one hand, we have an unfolding over time; on the other, a synchronic relation that appears as an infinitely fast process. In reality we have a co-relation, an intertwining of many relations among a huge number of variables of which we are only marginally, if at all, aware. Assuming that we have a good description of the n-ary relations R_1^n, R_2^n, …, R_i^n, moreover that we can even “relate” relations of a different order (n = 3 vs. n = 4, for instance), and express this relation in a co-relation, it becomes clear that co-relations are descriptive of higher-order relations. For example, two binary relations are identical when their converses are identical. In any sequence of the form xR_i y, zR_j w, uR_k v, etc., we are trying to identify what the relation is among the various relations R_i, R_j, R_k, etc., represented by R_i R_α R_j, R_j R_β R_k, etc. The co-relations R_α, R_β, R_γ (e.g., son of and daughter of correspond to progeny, but among the co-relations we will find similarity or distinction, among other things) can apply to the subsets of all R_i (i = 1, …, n) sharing a certain distinctive characteristic (such as similarity). We can further define referents (Ref) and relata (Rel), as well as a relation between referents or relata denoted as Sg (Sagitta, i.e., arrow). Not by accident, the arrow can graphically suggest a dynamics from the present to the future (prediction), or the other way around, from the future to the present (anticipation). After Peirce, Tarski (1941) produced an axiomatized theory of relations that, not unlike Boolean logic, could serve as a basis for effective computations of relations and co-relations. It is quite possible that the computation of co-relations could be built around the formalism of quantum computing. In this case, we would operate on the value of the entanglement, not on the state of a particle. It is a task that invites further work. Last but not least, we invite the thought of considering relations among incursions and hyperincursions as a means of testing their descriptive power even more deeply.

6.2 Making Use of the Co-Relation Model

Having advanced this model of anticipation as a form of computation, based on the dynamic generation of models and on competition among them, and encoded in a formalism that captures co-relations (thus the spirit of non-locality), I would like to present some examples speaking in favor of an understanding of anticipation that occasionally comes close to what I have proposed above. These are not direct applications of the theory I have advanced so far; rather, they are suggestive of its possible directions, if not of its meaning.

6.2.1 Anticipatory Document Caching

Incidentally, anticipatory document caching, with the purpose of reducing latency in Web transactions, is introduced in a language reminiscent of Einstein’s observation, “Everyone talks about the speed of light but nobody ever does anything about it.” The reason for the provocative introduction is obvious: interactive HTML (i.e., text transmission through the Web) requires at least T-1 connection speeds (i.e., 1.5 Mbps).
Once images are used, the requirement increases to T-3 lines (45 Mbps). Cross-country interactive screen images push the limit to 155 Mbps. Places such as the major cities on the West Coast of the USA (San Francisco, Los Angeles) are at least 85 milliseconds away from cities on the East Coast (Boston, New York). Interactivity under the limitations of the speed of light–assuming that we could send data at that speed and along the shortest path–is an illusion. In view of this practical observation, those involved in the design of networks, of communication protocols, of client-server access, and the like are faced with the task of reducing the time between access request and delivery. Among the methods used are the utilization of inter-request bandwidth (transfer of unrequested files when no other use is made of it), proactive requests (preloading a client or intermediate cache with anticipated requests), and optimization of topology (checking where files will be best used, combining identical requests and responses over shared links). What Touch et al. (1992, 1996, 1998) accomplished is an effective procedure for providing co-relations. Evidently, they realize that such co-relations cannot rely on a second channel through which requests would travel faster than the information itself. Accordingly, they initiate processes that are in fact independent of the communication between the client and the remote server. Such processes facilitate an anticipatory behavior based on predictive cues corresponding to the searched-for information. They also define where in a network such optimization servers should be placed. I insist upon this mechanism of implementation not only because of its significance for the networked community, but primarily in view of the understanding that anticipatory computation is one of producing meaningful co-relations. The entanglement between the search process and the pre-fetching of data is stricto sensu a pseudo-anticipation. But so are all other implementations known to date. These are all models of possible actions, and it is quite practical to think of generating even more models as the user gets involved in a certain transaction.

6.2.2 Software Design

The same idea was implemented in high-end 3D modeling software (e.g., UNIGRAPHICS), under the guidance of a better understanding of what designers can and would do at a certain juncture in visualizing their projects. The use of computation resources within such programs makes it necessary to anticipate what is possible and to all but preclude functions and utilities that make no sense at a certain point. This is realized through a STRIM function. Instead of allowing the program to react to any and all possible courses of action, some functions are disabled. Thus, the functions essential to the task can take advantage of all available resources. (This is what STRIM makes possible.) It is by all practical means a pro-active concept based on realizing the co-relations among the various components of the program.

6.2.3 Agents Coordination

Another aspect of co-relation is coordination. It can be ascertained that cooperative activities can take place only if a minimum of anticipation–in one or several of the forms discussed so far–is provided. This applies to every form of cooperation we can think of: commerce, work on an assembly line (where anticipation is built in through planning and control mechanisms), the pragmatics of erecting a building, the performing arts, sports. Coordination is a particular embodiment of anticipation.
It can be expressed, for instance, in requirements of synchronization defined to ensure that from a set of possibilities the optimum is actually pursued. Thus, in a given situation, from a broad choice of what is possible, what is optimal is accomplished. The goal is to maximize the probability of successful cooperation. This is achieved by implementing anticipatory characteristics. I would like to mention here as an example the RoboCup world champion, designed and implemented by Manuela Veloso, Peter Stone, and Michael Bowling (of Carnegie Mellon University). This is an autonomous agent collaboration with the purpose of achieving precise goals (in this case, winning a soccer game between robotic teams) in a competitive environment. Stated succinctly in the words of the authors, “Anticipation was one of the major differences between our team and the other teams” (1998). Let us focus on this aspect and briefly describe the solution. What was accomplished in this implementation is a model of an unfolding soccer game. But instead of the limited action-reaction description, the authors endowed the “players” (i.e., agents) with the ability to maximize their contributions through anticipatory movements corresponding to increasing the team’s chance to execute successful passes leading to scoring. It is a relational approach: Agents are placed in co-relation (“taking into account the position of the other robots–both teammates and adversaries”) and in respect to the current and possible future positions of the ball. It is evidently a multi-objective description, that is, a dynamic set of models, with what the authors call “repulsion and attraction points.” The anticipation algorithm (SPAR, Strategic Positioning with Attraction and Repulsion) contains weighted single-objective decisions (a toy sketch in this spirit follows at the end of this subsection). Correctly assuming that transitions among states (i.e., choices among the various models) for each of the cooperating agents take time (computing cost, in a broader sense), the authors implement the anticipatory feature in the form of selection procedures. The goal is to increase (ideally, to find the maximum of) the probability of future collaboration as the game unfolds. The agents are given a degree of flexibility that results in adjustments meant to enhance the probability of individual actions useful to the team. Additionally, an algorithm was designed in order to allow the “players” (team agents) to position themselves in anticipation of possible collaboration needs among teammates. Individual action and team collaboration are coordinated in anticipation (i.e., in predictive form) of the actions of the opponents. At times, though, the anticipatory focus degrades to reactive moves. Less successful in the competition, but inspired by Rosen’s definition, the team of the University of Caen (France) defined the following program: “Anticipation allows the consideration of global phenomena that cannot be treated through a local reactive approach. The anticipation of the actions of the adversary or of its teammates, the anticipation of the change of the other teamplayers’ roles, the anticipation of the ball’s movements, and the anticipation of conflicts among teammates are some of the forms of anticipation that our system tries to account for” (Stinckwich, Girault, 1999).
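A toy sketch in the spirit of the attraction/repulsion idea described above; the linear force model, the weights, and the positions are invented for illustration and do not reproduce the published SPAR algorithm:

```python
import numpy as np

def spar_position(agent, ball, teammates, opponents,
                  w_ball=1.0, w_mate=0.3, w_opp=0.8, step=0.1):
    """One toy positioning update: attraction toward the anticipated
    ball position, repulsion from other players (invented weights)."""
    pos = np.asarray(agent, dtype=float)
    force = w_ball * (np.asarray(ball, dtype=float) - pos)
    for others, w in ((teammates, w_mate), (opponents, w_opp)):
        for p in others:
            d = pos - np.asarray(p, dtype=float)   # points away from p
            force += w * d / (np.linalg.norm(d)**2 + 1e-9)
    return pos + step * force

new_pos = spar_position(agent=(0.0, 0.0), ball=(5.0, 2.0),
                        teammates=[(1.0, 0.0)], opponents=[(0.0, 1.0)])
print(new_pos)
```

Feeding predicted rather than current ball positions into such an update is what turns a reactive positioning rule into an anticipatory one.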
6.2.4 Auto-Associative Memories

Along the same line of thought, it is worth mentioning that in the area of the cognitive sciences, neural architectures involving auto-associative memories are used in attempts to implement anticipatory characteristics. Such memories reproduce input patterns as output. In other words, they mimic the fact that we remember what we memorize, which in essence we can describe through recursive or, better yet, incursive functions. The association of patterns of memorized information with themselves is powerful because, in remembering, we provide ourselves part of what we are looking for; that is, we anticipate. The context is supportive of anticipation because it supports the human experience of constituting co-relations. We can apply this to computer memory. Instead of memory-gobbling procedures, which hike the cost of computation and affect its effectiveness, auto-associative memory suggests that we can better handle fewer units, even if these are of a bigger size. Jeff Hawkins (1999), who sees “intelligence as an ability … to make successful predictions about its input,” i.e., as an internal measure of sensory prediction, not as a measure of behavior (still an AI obsession), applied his pattern classifier to handprinted-character recognition. The Palm Pilot™ might profit sooner than we think from the anticipatory thought that went into the successful handwriting recognition program that Hawkins authored. (A minimal auto-associative memory is sketched below.)
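Here is the minimal sketch announced above: a textbook Hopfield-style auto-associative memory, in which patterns stored via the Hebb rule are recovered from corrupted cues; the memory completes the part of the pattern we provide.

```python
import numpy as np

# Two stored patterns (rows); +1/-1 coding, chosen orthogonal.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebb rule: each pattern reinforces the couplings consistent with it.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Iterate the network; the cue is completed toward a stored pattern."""
    s = np.array(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s.astype(int)

noisy = patterns[0].copy()
noisy[:2] *= -1                 # corrupt two entries of the cue
print(recall(noisy))            # recovers patterns[0]
```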
6.3 Interactivity

Such and similar examples are computational expressions of the many aspects of anticipation. Their interactive nature draws our attention to the very telling distinction between algorithmic and interactive computation. In algorithmic computation, we basically start with a description (called an algorithm) of what it takes to accomplish a certain task. The computer–a Turing machine–executes a single-thread operation (the von Neumann paradigm of computation) on data appropriately formatted according to syntactic constraints. As such, the process of computation is disconnected from the outside world. Accordingly, there is no room for anticipation, which always results from interaction. In the interactive model, the outside world drives the process: Agents react to other agents; robots operate in a dynamic environment and need to be endowed with anticipatory traits. Searches over networks, not unlike airline ticket purchasing and other interactive tasks, are driven by those who randomly or systematically pursue a goal (find something, or let something surprise you). As Peter Wegner (1996), one of the proponents of interactive computation, expresses it, “Algorithms are ’sales contracts’ that deliver an output in exchange for an input. A marriage contract specifies behavior for all contingencies of interaction (’in sickness and health’) over the lifetime of the object (’till death do us part’).” The important suggestion here is that we can conceive of object-based computation in which object operations (two or more) share a hidden state.

Fig. 4 Interactive computation: the shared state

None of the operations (or processes) is algorithmic, since they do not control the shared state, but participate in an interaction through the shared state. They are also subject to external interaction. What is of exceptional importance here is that the response of each operation to messages from outside depends on the shared state, accessed through non-local variables of the operations. The non-locality made possible here corresponds to the nature of anticipation. Interactive systems are inherently incomplete, and thus not subject to Gödelian strictures in respect to their consistency. Interactivity requires that the computation remain connected to the practical experiences of human self-constitution, i.e., that we overcome the limitations of syntactically limited processing, or even of semantic referencing, and reach the pragmatic level. Processes in this kind of computation are multi-threaded, open-ended, and subject to predictive or non-predictive interactions. The Turing machine cannot describe them; and implementation in anticipatory computing machines per se is probably still far away. This brings up, somehow by association, the question of whether the category of artifacts called programs is anticipatory by design or by its condition. The question is pertinent not only to computers, since in the language of modern genetics, programming (as in the encoding of DNA, for example) plays an important role. It is, however, obvious that silicon hardware (as one possible embodiment of computers) and DNA are quite different, not only in view of their make-up, but more in view of their condition. If birds are “programmed” for their migratory behavior, then these “programs” are based on entailment schemes of extreme complexity. The same applies even more to the immune system.

6.3.1 Virtual Reality

A special category of interactive computation is represented by virtual reality implementations, all intrinsically pseudo-anticipatory environments of a multi-sensorial condition. In the virtual domain, a given set of co-relations can be established or pursued. Entanglement is part of the broader design. Various processes are triggered in a confined space-and-time, i.e., in a subset of the world. Non-locality is a generic metaphor in the virtual realm, made possible by the integration of the human subject. Indeed, as we advance towards molecular, biological, and genetic computation–where the distinction between real and virtual is less than clear-cut–we reach new levels of pragmatic integration. Evolutionary computation will probably be driven by the inherent anticipatory characteristic of the living. As designs of computation processes at the chromosome level are advanced, a foundation is laid for computation that involves and facilitates self-awareness. Interaction at this level goes deeper than the interaction embodied in the examples mentioned above; that is, at this level, mind-interaction-like mechanisms are possible, and thus true anticipation (not just the pseudo type) emerges as a structural property. We are used to the representation of anticipatory processes through models that run at a higher speed than the systems modeled: A rocket launch is anticipated in the simulation that “runs” ahead of the real time of the launch (a toy look-ahead loop of this kind is sketched below). The program anticipates, i.e., it searches for all the correlations on which the proper functioning of a very complex system (various elements tightly integrated in the whole) depends. We have here, not unlike the case of data pre-fetching, or of integration through search in a space of possibilities, or of auto-associative memory, a mechanism for ensuring that co-relations are maintained above and beyond the deterministic one-directional temporal chain. The more interesting bi-directional chain is not even imaginable in such applications. The spookiness of anticipatory computation is not reducible only to the speed of interactions that worried Einstein. It also involves a bi-directional time arrow.
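A toy look-ahead loop of the kind just mentioned: before committing each real step, the controller simulates the model several steps ahead of real time and holds as soon as the pre-computed trajectory would violate a constraint. The dynamics and the limit are invented for illustration:

```python
def step(state, dt=0.1):
    """Invented toy dynamics: position advances, velocity grows."""
    pos, vel = state
    return (pos + vel * dt, vel + 0.5 * dt)

def safe_for(state, horizon, limit=10.0):
    """Simulate `horizon` steps ahead of real time; report whether the
    pre-computed trajectory stays within the (invented) limit."""
    for _ in range(horizon):
        state = step(state)
        if state[0] > limit:
            return False
    return True

state, t = (0.0, 1.0), 0
while safe_for(state, horizon=20):
    state = step(state)
    t += 1
print(f"violation anticipated; holding at real step {t}, state={state}")
```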
The account given in this paper, which simultaneously occasioned the advancement of my own model, identifies the many perspectives of the possible frontier in science represented by the subject of anticipation.

7 Conclusion

In order to establish anticipatory computation as an effective method, working models that display anticipatory characteristics need to be realized. The examples given herein can be seen as the specifications for such possible models. Work on alternative computing models is illustrative of what can be done and of the return to be expected. Co-relations, difficult to deal with once we part from the world of first-order objects, are another promising avenue, as are possibilistic-based computations. Finally, if quantum effects prove to take place also at large scales, anticipation, as entanglement (i.e., co-relation), might turn out to be the binding substratum of our universe of existence.

References

Barker, M. (1996). Class based on How to Write Horror Fiction by William F. Nolan.
Bartlett, F.C. (1951). Essays in Psychology. Dedicated to David Katz. Uppsala: Almqvist & Wiksells, pp. 1-17.
Bell, John S. (1964). “On the Einstein Podolsky Rosen Paradox,” Physics, 1, pp. 195-200.
Bell, John S. (1966). “On the Problem of Hidden Variables in Quantum Mechanics,” Reviews of Modern Physics, 38, pp. 447-452.
Berry, M.J., I.H. Brivanlou, T.A. Jordan, M. Meister (1999). “Anticipation of moving stimuli by the retina,” Nature, 398, pp. 334-338.
Bohm, David (1951). Quantum Theory. London: Routledge.
Bohr, Niels (1987). Atomic Theory and the Description of Nature: Four Essays with an Introductory Survey (1934). New York: AMS Press. (See also The Philosophical Writings of Niels Bohr, Vol. 1, Oxbow Press.)
Descartes, René (1637). Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences. Leiden.
Descartes, René (1644). Principia philosophiae.
Dubois, Daniel (1992). Le labyrinthe de l’intelligence: de l’intelligence naturelle à l’intelligence fractale. Paris: InterEditions; Louvain-la-Neuve: Academia.
Dubois, Daniel M. (1992). “The Hyperincursive Fractal Machine as a Quantum Holographic Brain,” CCAI, 9:4, pp. 335-372.
Dubois, Daniel, G. Resconi (1992). Hyperincursivity: A New Mathematical Theory. Liège: Presses Universitaires de Liège.
Dubois, Daniel M. (1996). “Hyperincursive Stack Memory in Chaotic Automata,” Actes du Symposium ECHO: Modèles de la boucle évolutive (A.C. Ehresmann, G.L. Farre, J-P. Vanbremeersch, Eds.). Université de Picardie Jules Verne, pp. 77-82.
Dubois, Daniel M. (1999). “Hyperincursive McCulloch and Pitts Neurons for Designing a Computing Flip-Flop Memory,” Computing Anticipatory Systems: CASYS ’98, Second International Conference, AIP Conference Proceedings 465, pp. 3-21.
Dürrenmatt, Friedrich (1992). The Physicists. New York: Grove Press. (Originally published as Die Physiker, 1962; a paperback English edition was published by Oxford University Press, 1965.)
Einstein, A., B. Podolsky, N. Rosen (1935). “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” The Physical Review, 47, pp. 777-780.
Epicurus (1933). Cf. Tullius Cicero, De Natura Deorum (trans. Harry Rackham), Loeb Classical Library.
Epstein, Paul R., K. Linthicum, et al. (1999). “Climate and Health,” Science, July 16, 1999, pp. 347-348.
Etter, Thomas (1999). Psi, Influence, and Link Theory (manuscript dated June 11, 1999).
Feyerabend, Paul (1975). Against Method. London: New Left Books.
Feynman, Richard P. (1965). The Character of Physical Law. London: BBC Publications.
Feynman, Richard P. (1982). “Simulating physics with computers,” International Journal of Theoretical Physics, 21:6/7, pp. 467-488.
Foerster, Heinz von (1976). “Objects: tokens for (eigen-)behaviors,” Cybernetics Forum, 5:3-4, pp. 91-96.
Foerster, Heinz von (1999). Der Anfang von Himmel und Erde hat keinen Namen, 2nd ed. Vienna: Döcker Verlag.
Garis, Hugo de (1994). “An Artificial Brain: ATR’s CAM-Brain Project,” New Generation Computing, 12:2, pp. 215-221.
Gribbin, John (1998). New Scientist, August 1998.
Gribbin, John (1999). www.epunix.biols.susx.ac.uk/Home/John Gribbin/ Quantum
Gribbin, John, Mark Chimsky (1996). Schrödinger’s Kittens and the Search for Reality: Solving the Quantum Mysteries. New York: Little, Brown & Co.
Hawkins, Jeff (1999). “That’s Not How My Brain Works” (interview), Technology Review, July/August, pp. 76-79.
Holmberg, Stig (1998). “Anticipatory Computing with a Spatio-Temporal Fuzzy Model,” Computing Anticipatory Systems: CASYS ’97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.). The American Institute of Physics, pp. 419-432.
Homan, Christopher (1997). Beauty is a Rare Thing, www.cs.rochester.edu:80/users/facdana/cs240_Fall97/Ass7/Chris Homan
Julià, Pere (1998). “Intentionality, Self-reference, and Anticipation,” Computing Anticipatory Systems: CASYS ’97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.). The American Institute of Physics, pp. 209-243.
Kant, Immanuel (1781). Kritik der reinen Vernunft, 1. Auflage. (Cf. Critique of Pure Reason, trans. Norman Kemp Smith. New York: Macmillan Press.)
Kelly, G.A. (1955). The Psychology of Personal Constructs. New York: Norton.
Knutson, Brian (1998). Functional Neuroanatomy of Approach and Active Avoidance Behavior, http://www.gmu.edu/departments/frasnow/abstracts_frames/abs98/Knut9812.
Libet, Benjamin (1989). “Neural Destiny. Does the Brain Have a Mind of Its Own?” The Sciences, March/April 1989, pp. 32-35.
Libet, Benjamin (1985). “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action,” The Behavioral and Brain Sciences, 8:4, December 1985, pp. 529-539.
Linthicum, Kenneth, et al. (1999). “Climate and Satellite Indicators to Forecast Rift Valley Fever Epidemics in Kenya,” Science, July 16, 1999, pp. 367-368.
Mancuso, J.C., J. Adams-Weber (1982). “Anticipation as a constructive process,” in J.C. Mancuso & J. Adams-Weber (Eds.), The Construing Person. New York: Praeger, pp. 8-32.
Nadin, Mihai (1988). Minds as Configurations: Intelligence is Process. Graduate Lecture Series, Ohio State University.
Nadin, Mihai (1991). Mind-Anticipation and Chaos. Stuttgart: Belser Presse. (The text can be read in its entirety on the Web at www.networld.it/oikos/naminds1.htm.)
Nadin, Mihai (1997). The Civilization of Illiteracy. Dresden: Dresden University Press.
Nadin, Mihai (1998). “Computers,” entry in The Encyclopedia of Semiotics (Paul Bouissac, Ed.). New York: Oxford University Press, pp. 136-138.
Newton, Sir Isaac (1687). Philosophiae naturalis principia mathematica.
Peat, David (undated). Non-locality in nature and cognition, www.redbull.demon.co.uk/bibliography/essays/nat-cog
Peirce, Charles S. (1870). “Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole’s Calculus of Logic,” Memoirs of the American Academy of Arts and Sciences, 9.
Peirce, Charles S. (1883). “The Logic of Relatives,” Studies in Logic by Members of the Johns Hopkins University.
Peirce, Charles S. (1931-1935). The Collected Papers of Charles Sanders Peirce, Vols. I-VI (C. Hartshorne and P. Weiss, Eds.). Harvard University Press. The convention for quoting from this work is to cite volume and paragraph, separated by a decimal point: 2.226.
Postrel, Virginia (1997). “Reason on Line,” Forbes ASAP, August 25, 1997.
Powers, William T. (1973). Behavior: The Control of Perception. Amsterdam: de Gruyter.
Powers, William T. (1989). Living Control Systems, I and II (Christopher Langton, Ed.). New Canaan: Benchmark Publications. More information at www.ed.uinc.edu/csg.
Rosen, Robert (1972). “Quantum Genetics,” in Foundations of Mathematical Biology, Vol. I, Subcellular Systems. New York/London: Academic Press.
Rosen, Robert (1985). Anticipatory Systems. Oxford: Pergamon Press.
Rosen, Robert (1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. New York: Columbia University Press.
Rosen, Robert (1999). Essays on Life Itself. New York: Columbia University Press.
Sommers, Hans (1998). “The Consequences of Learnability for A Priori Knowledge in a World,” Computing Anticipatory Systems: CASYS ’97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.). The American Institute of Physics, pp. 457-468.
Stapp, Henry P. (1991). In Quantum Implications: Essays in Honor of David Bohm (B.J. Hiley & F.D. Peat, Eds.). London: Routledge.
Stinckwich, Serge, François Girault (1999). Modélisation d’un Robot Footballeur. Mémoire de DEA, Caen. See also: www.info.unicaen.fr/#girault/Memoire_dea
Swarm Simulation System. See: www.swarm.org
Tarski, Alfred (1941). “On the Calculus of Relations,” Journal of Symbolic Logic, 6, pp. 73-89.
Touch, Joseph D., et al. (1992). A Model for Latency in Communication.
Touch, Joseph D., John Heidemann, Katia Obraczka (1996). Analysis of HTTP Performance.
Touch, Joseph D. (1998). Large Scale Active Middleware.
Touch, Joseph D. See also www.isi.edu.
Veloso, Manuela, Peter Stone, Michael Bowling (1998). “Anticipation: A Key for Collaboration in a Team of Agents,” paper presented at the 3rd International Conference on Autonomous Agents, October 1998.
Vijver, Gertrudis van de (1997). “Anticipatory Systems: A Short Philosophical Note,” Computing Anticipatory Systems: CASYS ’97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.). The American Institute of Physics, pp. 31-47.
Wegner, Peter (1996). The Paradigm Shift from Algorithms to Interaction, draft of October 14, 1996.
Wildavsky, Aaron B. (1988). Searching for Safety.
Zadeh, Lotfi (1977). Fuzzy Sets as a Basis for a Theory of Possibility, ERL Memo M77/12.
Open Access | Nano Express

Singly ionized double-donor complex in vertically coupled quantum dots

Ramón Manjarres-García¹, Gene Elizabeth Escorcia-Salas¹, Ilia D. Mikhailov² and José Sierra-Ortega¹*

¹ Group of Investigation in Condensed Matter Theory, Universidad del Magdalena, Santa Marta, Colombia
² Universidad Industrial de Santander, A. A. 678, Bucaramanga, Colombia

Nanoscale Research Letters 2012, 7:489. doi:10.1186/1556-276X-7-489
Received: 10 July 2012 | Accepted: 3 August 2012 | Published: 31 August 2012
© 2012 Manjarres-García et al.; licensee Springer.

Abstract

The electronic states of a singly ionized on-axis double-donor complex (D2+) confined in two identical vertically coupled, axially symmetrical quantum dots in a threading magnetic field are calculated. The solutions of the Schrödinger equation are obtained by a variational separation of variables in the adiabatic limit. Numerical results are shown for the bonding and antibonding lowest-lying artificial-molecule states corresponding to different quantum dot morphologies, dimensions, separations between the dots, thicknesses of the wetting layers, and magnetic field strengths.

Keywords: Quantum dots; Adiabatic approximation; Artificial molecule. PACS: 78.67.-n; 78.67.Hc; 73.21.-b

Background

Quantum dots (QDs) have opened the possibility to fabricate both artificial atoms and molecules with novel and fascinating optoelectronic properties which are not accessible in bulk semiconductor materials. An attractive route for nano-structuring semiconductor materials is offered by self-assembled quantum dots, which are formed in the Stranski-Krastanow growth mode by depositing the material on a substrate with a different lattice parameter [1-5]. The electrical and optical properties of these structures may be changed in a controlled way by doping with shallow impurities, whose energy levels are defined by the interplay between the reduction of the physical dimensions, the Coulomb attraction, and the inter-particle correlation. Recently, it has been proposed to use the singly ionized double-donor system (D2+) confined in a single semiconductor QD [6] or ring [7] as an adequate functional element in a wide range of device applications, including spintronics, optoelectronics, photovoltaics, and quantum information technologies. This two-level system encodes logical information either in the spin or in the charge degrees of freedom of the single electron and allows one to manipulate conveniently its molecular properties, such as the energy splitting between the bonding and antibonding lowest-lying molecular-like states or the spatial distribution of carriers in the system [8-12]. One can expect that the singly ionized double-donor system (D2+) confined in vertically coupled QDs should have similar properties. In this paper, we analyze the electronic states of an artificial hydrogen-molecular-ion-like complex (D2+), composed of two positive ions sharing a single electron, which is constrained to move between two identical vertically coupled, axially symmetrical QDs in the presence of a threading magnetic field.

Methods

Below, we analyze the model of two separated on-axis singly ionized donors, confined in two coaxial, vertically stacked QDs, whose identical morphologies present axially symmetrical layers whose shape is given by the dependence of the layer thickness h on the distance ρ from the axis as follows:

h(ρ) = db + d0·fn(ρ)·ϑ(R0 − ρ)
Here, R0 is the base radius, db is the wetting layer thickness, d0 is the maximum height of the QD over this layer, ϑ(x) is the Heaviside step function (equal to 0 for x < 0 and to 1 for x > 0), and fn(ρ) = [1 − (ρ/R0)^n]^(1/n). The morphology is controlled in this model by means of the integer shape-generating parameter n, which is equal to 1, equal to 2, or tends to infinity for conical pyramid-like, lens-like, and disk-like geometrical shapes, respectively. As an example, the 3D image of an artificial singly ionized molecule confined in lens-like QDs is presented in Figure 1.

Figure 1. Image of the singly ionized molecule confined in lens-like QDs.

Besides, we assume that an external homogeneous magnetic field B = Bẑ is applied along the quantum dots’ axis. The dimensionless Hamiltonian of the single electron in this D2+ complex in the effective-mass approximation can be written (with the two donors located on the axis at z1 and z2) as

H = −∇² + γL̂z + γ²ρ²/4 + Vc(ρ, z) − 2/√(ρ² + (z − z1)²) − 2/√(ρ² + (z − z2)²)   (1)

where Vc(ρ, z) is the confinement potential, equal to 0 inside and to V0 outside the QD. The last two terms in Equation 1 correspond to the attraction between the electron and the ions. The effective Bohr radius a0* = ℏ²ε/m*e², the effective Rydberg Ry* = e²/2εa0*, and γ = eℏB/2m*cRy* have been taken above as the units of length, energy, and the conventional dimensionless magnetic field strength, respectively. As both donors are located on the axis, the potential is axially symmetrical, the angular momentum Lz commutes with the Hamiltonian, and the corresponding eigenvalues give us one good quantum number, m. In this representation, the Hamiltonian (Equation 1) in cylindrical coordinates depends on only two coordinates:

Hm = −(1/ρ) ∂/∂ρ (ρ ∂/∂ρ) − ∂²/∂z² + m²/ρ² + γm + γ²ρ²/4 + V(ρ, z)   (2)

where V(ρ, z) comprises the confinement potential and the two Coulomb terms. Taking into account that the thickness of the QDs is typically much smaller than their lateral dimension, and that the electron motion in the growth direction is therefore much faster than the in-plane motion, one can take advantage of the adiabatic approximation [13], in which the wave function is presented as a product of two functions,

Ψm(ρ, z) = f(ρ, z) Φ(ρ)   (3)

where the first function f(ρ, z) describes the fast motion in the z direction and satisfies the wave equation with ‘frozen’ radial coordinate ρ,

[−∂²/∂z² + V(ρ, z)] f(ρ, z) = Ef(ρ) f(ρ, z)   (4)

while the radial part Φ(ρ) of the wave function is found in a second step from the equation

[−(1/ρ) d/dρ (ρ d/dρ) + m²/ρ² + γm + γ²ρ²/4 + Ef(ρ)] Φ(ρ) = Em Φ(ρ)   (5)

In our numerical procedure, we solve Equation 4 repeatedly for each value of ρ by using the trigonometric sweep method [13] in order to restore the unknown function Ef(ρ). Once this function is found, the energies Em of the molecular complex can be established by solving Equation 5. As the potential V(ρ, z) for each fixed value of ρ is an even function, V(ρ, −z) = V(ρ, z), with respect to the variable z, corresponding to a symmetrical (non-rectangular) quantum well, all solutions of Equation 4 can be arranged in two sets: odd solutions f−(ρ, −z) = −f−(ρ, z) and even solutions f+(ρ, −z) = f+(ρ, z), called antibonding and bonding states, respectively. These sets of functions can be found as the solutions of the boundary value problems corresponding to the differential Equation 4 within the range 0 < z < ∞, with the frontier conditions f−(ρ, 0) = 0 (antibonding) or ∂f+(ρ, z)/∂z|z=0 = 0 (bonding), together with f±(ρ, z) → 0 as z → ∞. (A schematic numerical illustration of this two-step procedure is sketched below.)
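The two-step adiabatic procedure can be illustrated schematically in Python. This is only a sketch: a finite-difference diagonalization stands in for the trigonometric sweep method of [13], a single symmetric well of width h(ρ) stands in for the coupled double-well profile, the Coulomb terms in Equation 4 are omitted, and the grid sizes and unit conversions (358 meV ≈ 71.6 Ry*, 1 nm ≈ 0.1 a0*) are simplifying assumptions for illustration only.

```python
import numpy as np

# Assumed toy parameters in effective units (Ry*, a0*): V0 = 358 meV
# ~ 71.6 Ry*, d_b = 1 nm ~ 0.1 a0*, d_0 = 4 nm ~ 0.4 a0*,
# R_0 = 20 nm ~ 2.0 a0*; lens-like shape (n = 2).
V0, db, d0, R0, n = 71.6, 0.1, 0.4, 2.0, 2

def h(rho):
    """Layer thickness h(rho) = d_b + d_0*f_n(rho)*theta(R_0 - rho)."""
    if rho >= R0:
        return db
    return db + d0 * (1.0 - (rho / R0)**n)**(1.0 / n)

def lowest_level(width, z_max=3.0, m_pts=400):
    """Ground level of a finite symmetric well of the given width,
    by finite-difference diagonalization (Coulomb terms omitted)."""
    z = np.linspace(-z_max, z_max, m_pts)
    dz = z[1] - z[0]
    V = np.where(np.abs(z) <= width / 2.0, 0.0, V0)
    H = (np.diag(2.0 / dz**2 + V)
         + np.diag(-np.ones(m_pts - 1) / dz**2, 1)
         + np.diag(-np.ones(m_pts - 1) / dz**2, -1))
    return np.linalg.eigvalsh(H)[0]

# Step 1: freeze rho, solve the fast z-motion -> effective potential E_f
rhos = np.linspace(0.0, 1.2 * R0, 30)
E_f = np.array([lowest_level(h(r)) for r in rhos])

# Step 2 would insert E_f(rho) into the slow radial equation (Eq. 5)
# and diagonalize it for each angular momentum m.
print(np.round(E_f[:5], 3))
```

The essential point of the method survives the simplifications: the fast z-problem is solved once per radial grid point, and its eigenvalue Ef(ρ) then acts as an effective potential in the slow radial problem.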
Results and discussion

We have performed numerical calculations of the one-electron renormalized energies Em as functions of the magnetic flux for QDs with different morphologies, dimensions, and separations between the layers, in order to analyze the Aharonov-Bohm and quantum size effects. We consider for our simulations In0.55Al0.45As/Al0.35Ga0.65As structures with the following values of the physical parameters: dielectric constant ε = 12.71; electron effective mass, in the dot region and in the region outside the dot, m* = 0.076 m0; conduction-band offset at the junctions V0 = 358 meV; effective Bohr radius a0* ≈ 10 nm; and effective Rydberg Ry* ≈ 5 meV. First, we calculate the energies of the molecular complex as functions of the magnetic field in disk-like, lens-like, and cone-like vertically coupled QDs, and in a single one-electron quantum ring (QR) with a smooth non-homogeneity of the surface. Results for vertically coupled QDs with heights d0 = 4 nm, wetting layer thicknesses db = 1 nm, radii R0 = 20 nm, and separation between them d = 6 nm are shown in Figure 2.

Figure 2. Energies as functions of the magnetic field of a D2+ in vertically coupled quantum dots (heights 4 nm, wetting layer thicknesses 1 nm, radii 20 nm, and separation between them 6 nm).

It is seen that in all cases the energy levels are very sensitive to the magnetic field, and their dependencies on the magnetic field strength exhibit multiple crossovers and reorderings. Comparing these dependencies for the disk, the lens, and the cone in Figure 2, one can also observe a successive increase of the number of crossovers and a lowering of the energy region where such crossovers occur. This is related to the variation of the electron probability distribution inside and around the dot layers, which is similar to the redistribution of charge on a metallic surface when its geometry varies from flat to spike-like. Such a variation of the probability distribution is a consequence of the stronger confinement in structures with spike-like QD geometry, where the electron-ion separation is defined by the interplay between the electrostatic attraction and the strong structural confinement; the resulting ring-like electron probability density distribution is more stable with respect to the external magnetic field. Therefore, the energy dependencies for cone-like QDs have a shape similar to those exhibited by structures with ring-like geometry, where they are known as the Aharonov-Bohm effect. The Aharonov-Bohm effect, usually observed in ring-like heterostructures, is a manifestation of the competition between the paramagnetic and diamagnetic terms in the Hamiltonian, resulting in oscillations of the ground state energy. Such oscillations are impossible in disk-like structures because of a significant decrease of the diamagnetic term’s contribution as the magnetic field increases and the electron probability distribution becomes more contracted. In QDs with a spike-like morphology, the electron probability density is already strongly confined, the external magnetic field can no longer reduce the diamagnetic contribution further, and the energy dependencies on the increasing magnetic field become similar to those of ring-like structures. In Figure 3, we present the calculated density of electronic states for QDs with the three different morphologies, on the left side for zero magnetic field (γ = 0) and on the right side for γ = 0.8. It is seen that the density of electronic states in the case of zero magnetic field has, for the disk-like structure, a larger value in the region of the low-lying energy levels, and that it decreases successively as the morphology becomes more and more spike-like.
This is due to the fact that the electron confinement in the disk is weaker than that in the lens, and that in the lens is weaker than that in the cone.
Figure 3. Density of the electronic states for a D2+ in vertically coupled quantum dots (heights 3 nm, wetting-layer thicknesses 2 nm, radii 20 nm, and separation between them 6 nm) for two different values of the magnetic field, γ = 0 and γ = 0.8.
Also, it is seen that the lowest peak, corresponding to the ground bonding state, is more significantly separated from the excited states in the cone-like structure than in the two other structures. This is due to the stronger confinement of the electron in the cone-like structure, where the electron is mainly located nearer to the donor than in the disk-like and lens-like structures. Comparing the densities of states presented on the left and right sides of Figure 3, one can see the remarkable modifications that the corresponding curves undergo. In particular, in the disk-like structure the presence of the magnetic field produces a displacement of the peaks in the region of the low-lying energies. In the lens-like and cone-like structures the modification is inverted; the peaks are reorganized in such a way that their distribution becomes almost homogeneous. The redistribution of the peaks' positions in the lens is governed mainly by the additional confinement provided by the external magnetic field, while the analogous redistribution in the more spike-like structures is mainly due to the Aharonov-Bohm effect.
In short, we propose a simple numerical procedure for calculating the energies and wave functions of a singly ionized molecular complex formed by two separated on-axis donors located in vertically coupled QDs in the presence of an external magnetic field. Our calculation includes some important characteristics of the heterostructure, such as the presence of the wetting layer and the possibility of varying the QD morphology. The curves of the energy dependencies on the external magnetic field for the disk-like, lens-like, and cone-like structures are presented. We find that the effect of the in-plane confinement on the electron-ion separation is stronger in spike-shaped QDs, and therefore the energy dependencies in such structures exhibit a behavior similar to that in ring-like structures. The analysis of the curves of the density of electronic states also confirms this result.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors contributed equally to this work. JSO created the analytic model with contributions from IM. RMG and GES performed the numerical calculations and wrote the manuscript. All authors discussed the results and implications and commented on the manuscript at all stages. All authors read and approved the final manuscript.
Authors' information
JSO obtained his Ph.D. in 2004 at the Universidad Industrial de Santander, where IM was his advisor. His research interests include the theory of semiconductor nanostructures. JSO is the head of the research group 'Condensed Matter Theory' at the University of Magdalena. GES and RMG are master's degree and Ph.D. students, respectively, and teachers at the University of Magdalena.
References
1. Jacak L, Hawrylak P, Wójs A: Quantum Dots. Berlin: Springer; 1997.
2. Leonard D, Pond K, Petroff PM: Critical layer thickness for self-assembled InAs islands on GaAs. Phys Rev B 1994, 50:11687-11692.
3. Lorke A, Luyken RJ, Govorov AO, Kotthaus JP: Spectroscopy of nanoscopic semiconductor rings. Phys Rev Lett 2000, 84:2223-2226.
4. Granados D, García JM: In(Ga)As self-assembled quantum ring formation by molecular beam epitaxy. Appl Phys Lett 2003, 82:2401.
5. Raz T, Ritter D, Bahir G: Formation of InAs self-assembled quantum rings on InP. Appl Phys Lett 2003, 82:1706.
6. Movilla JL, Ballester A, Planelles J: Coupled donors in quantum dots: quantum size and dielectric mismatch effects. Phys Rev B 2009, 79:195319.
7. Gutiérrez W, García LF, Mikhailov ID: Coupled donors in quantum ring in a threading magnetic field. Physica E 2010, 43:559.
8. Calderón MJ, Koiller B: External field control of donor electron exchange at the Si/SiO2 interface. Phys Rev B 2007, 75:125311.
9. Tsukanov AV: Single-qubit operations in the double-donor structure driven by optical and voltage pulses. Phys Rev B 2007, 76:035328.
10. Openov LA: Resonant pulse operations on the buried donor charge qubits in semiconductors. Phys Rev B 2004, 70:233313.
11. Koiller B, Hu X: Electric-field driven donor-based charge qubits in semiconductors. Phys Rev B 2006, 73:045319.
12. Barrett SD, Milburn GJ: Measuring the decoherence rate in a semiconductor charge qubit. Phys Rev B 2003, 68:155307.
13. Mikhailov ID, Marín JH, García LF: Off-axis donors in quasi-two-dimensional quantum dots with cylindrical symmetry. Phys Stat Sol (b) 2005, 242:1636.
Quantum Refutations and Reproofs
May 12, 2012
One of Gil Kalai's conjectures refuted but refurbished

Niels Henrik Abel is famous for proving the impossibility of solving the quintic equation by radicals, in 1823. Finding roots of polynomials had occupied mathematicians for centuries, but unsolvability had attracted scant effort and few tools until the late 1700's. Abel developed tools of algebra, supplied a step overlooked by Paolo Ruffini (whose voluminous work he did not know), and focused his proof into a mere six journal pages.
Today our guest poster Gil Kalai leads us in congratulating Endre Szemerédi, who on May 22 will officially receive the 2012 prize named for Abel. He then revisits his "Conjecture C" from his first post in this series, in response to a draft paper by our other guest poster Aram Harrow with Steve Flammia.
Szemerédi's prize is great news for Discrete Mathematics and Theoretical Computer Science, areas for which he is best known, and this blog has featured his terrific work here and here. The award rivals the Nobel Peace Prize in funds and brings the same handshake from the King of Norway. Gil offers the analogy that Abel's theorem showed why a particular old technology, namely solution by radicals, could not scale upward beyond the case of degree 4. The group-theoretic technology that superseded it, particularly as formulated by Évariste Galois, changed the face of mathematics. Indeed Abelian groups are at the heart of Peter Shor's quantum algorithm. Not only did the work by Abel and Galois pre-date the proofs against trisecting angles, duplicating cubes, and squaring circles, it made them possible.
Refutations and Revisions
Gil's analogy is not perfect, because quantum computing is hardly an "old" technology, and because currently there is no compelling new positive theory to supersede it. Working toward such a theory is difficult, and there are places where it might be tilting against the power stations of quantum mechanics itself. In this regard, Aram and Steve's paper provides a concrete counter-example to a logical extension of Gil's conjecture for the larger quantum theory, in a way that casts doubt on the original.
The refutation and revision of conjectures is a big part of the process described by Imre Lakatos in his book Proofs and Refutations, which was previously discussed in this blog. Here, the conjectures are physics conjectures, related to technological capability, and the "proof" and "reproof" process refers to confronting formal mathematical models with (counter-)examples and various checks by observations of Nature. After two sections by Aram and Steve explaining their paper and its significance, Gil assesses the effect on his original Conjecture C and re-assesses its motivation. The latter is reinforced by a line of research begun in 1980 with the following question by Sir Anthony Leggett, who won the Nobel Prize in Physics in 2003:
How far do experiments on the so-called "macroscopic quantum systems" such as superfluids and superconductors test the hypothesis that the linear Schrödinger equation may be extrapolated to arbitrarily complex systems?
Leggett's "disconnectivity measure" in his 1980 paper, "Macroscopic Quantum Systems and the Quantum Theory of Measurement," was an early attempt to define rigorously a parameter that distinguishes complicated quantum states. In this light, Gil formulates two revisions of his conjecture that stay true to his original intents while avoiding the refutation.
Then I (Ken) review lively comments that continue to further the debate in previous posts in our series.
Aram Harrow, with Steve Flammia
Recall that Gil defined an entanglement measure {K(\rho)} (there called {ENT}) on a quantum state {\rho} in a particular standard manner, where {\rho} signifies a possibly-mixed state. The statement of Conjecture C then reads:
There is a fixed constant {c}, possibly {c = 2}, such that for states {\rho_n} produced by feasible {n}-qubit quantum computers, {K(\rho_n) = O(n^c)}.
Here the technical meaning of "feasible" depends on which models of noisy quantum computers reflect the true state and capability of technology, and is hard for both sides to pin down. We can, however, still refute the conjecture by finding states {\rho} that by consensus ought to be feasible—or at least to which the barriers stated by Kalai do not apply—for which {K(\rho)} is large.
Our point of attack is that there is nothing in the definition of {K(\rho)} or in the motivation expressed for the conjecture that requires {\rho} to be an {n}-fold aggregate of binary systems. Quantum systems that represent bits, such as up/down or left/right spin, are most commonly treated, but are not exclusive to Nature. One can equally well define basic ternary systems, or 4-fold or 5-fold or {d}-fold, not even mandating that {d} be prime. Ternary systems are called qutrits, while those for general {d} are called qudits. The definition of {n}-qudit mixed states {\rho} allows {K(\rho)} to be defined the same way, and we get the same conjecture statement. Call that Conjecture C'. As Gil agrees, our note shows unconditionally that Conjecture C' is false, for {d} as low as {d = 8}.
Theorem 1. There exist intuitively feasible {n}-qudit states {\rho_n} on a 2-dimensional grid for which {K(\rho_n) = 2^{2n/3 - o(n)}}.
It is important to note that with {d=8} we cannot simply declare that we have a system on {3n} qubits, because we cannot assume a decomposition of a qudit state via tensor products of qubit states. Indeed when the construction in our note is attempted with qubits, the resulting states {\rho'_n} have {K(\rho'_n) \sim n^2}. However, our construction speaks against both the ingredients and the purpose of the original Conjecture C.
What the Conjecture is Driving At
Conjectures of this kind, as Steve and I see it, are attempts at what Scott Aaronson calls a "Sure/Shor separator." By his definition that would distinguish states we've definitely already seen how to produce from the sort of states one would require in any quantum computer achieving an exponential speedup over (believed) classical methods. It represents an admirable attempt to formulate QC skepticism in a rigorous and testable way.
However, we believe that our counterexamples are significant not especially because they refute Conjecture C, but because they do so while side-stepping Gil's main points about quantum error correction failing. More generally, we think that it's telling that it's so hard to come up with a sensible version of Conjecture C. In our view, this is because quantum computers harness phenomena, such as entanglement and interference, that are already ubiquitous. Nature makes them relatively hard to control, but it is also hard to focus sensibly on what about the control itself is difficult. The formulations of Conjecture C and related obstacles instead find themselves asserting the difficulty of creating rather than controlling.
Of course they are trying to get at the difficulty of creating the kinds of states needed for controlling, but the formulations still wind up trying to block the creation of phenomena that "just come naturally." In our view, the situation is similar to situations in classical computing. A modern data center exists in a state of matter radically unlike anything ever seen in pre-industrial times. But if you have to quantify this with a crude observable, then it's hard to come up with anything that wasn't already seen in much simpler technology, like light bulbs. Our note can be thought of as showing that Conjecture C refers to a correlation measure that is high not only for full-scale quantum computers, but even for the quantum equivalent of light bulbs—technology that is non-trivial, but by no means complex.
Gil Again: Revising Conjecture C
One of the difficult aspects of my project is to supply mathematical engines for the conjectures, which were initially expressed in informal English terms and with physical intuition. For example, in Conjecture 4 we need to define "highly entangled qubits" and "error-synchronization" formally. This crucial technical part of the project, which is the most time-consuming, has witnessed much backtracking. This happened with initial formulations of Conjecture 4 that failed when extended from qubits to qudits, which was indeed a reason for me to dismiss them and look for a more robust one, and this experience has guided me with other conjectures. Aram and Steve's example makes it necessary to look for another formal way to express the idea behind Conjecture C.
While rooted in quantum computer skepticism, Conjecture C expresses a common aim to find a dividing line between physical quantum states in the pre- and post-universal quantum computer eras. When Aram's grandchildren ask him, "Grandpa, how was the world before quantum computers?" he could reply: "I hardly remember, but thanks to Gil we had some conjectures regarding the old days"—and the grandchildren will burst into laughter about the old days of difficult entanglements.
Conjecture C expresses the idea that "complicated pure states" cannot be approached by noisy quantum computers. More specifically, the conjecture asserts that quantum states that can be realistically created by quantum computers are "{k}-local" where {k} is bounded (and perhaps is even quite small). But to formally define {k}-locality is a tricky business. (Joe Fitzsimons' 2-locality suggestions in comments beginning here and extending a long way down are related to this issue.)
We can be guided by the motivation stated on the first page of the paper by Anthony Leggett mentioned above, for his "disconnectivity measure" which intends to distinguish two kinds of quantum states:
Familiar "macroscopic quantum phenomena" such as flux quantization and the Josephson effect [correspond to states having very low] disconnectivity, while the states important to a discussion of the quantum theory of measurement have a very high value of this property.
Leggett has stayed active with this line of work in the past decade, and it may be informative to develop further the relation to his problems of quantum measurement and problems in quantum computation. In this general regard, let me discuss possible new mathematical engines for the censorship conjecture.
Conjecture C For Codes
Error-correcting codes are wonderful mathematical objects, and thinking about codes is always great.
Quantum error-correcting codes will either play a prominent role in building universal quantum computers or in explaining why universal quantum computers cannot be built, whichever comes first. The map I try to draw is especially clear for codes:
Conjecture C for codes: For some (small) constant {c}, pure states representing quantum error-correcting codes capable of correcting {c}-many errors cannot be feasibly approximated by noisy quantum computers.
As in the original version of Conjecture C, our notion of approximation is based on qubit errors. Conjecture 1 in the original post asserts that for every quantum error-correcting code we can only achieve a cloud of states, rather than essentially a Dirac delta function, even if we use many qubits for encoding. The expected qubit errors of the noisy state compared to the intended state can still be a small constant. Conjecture C for codes asserts that when the code corrects many errors, then this cloud will not even concentrate near a single code word. Here "many" may well be three or even two.
Conjecture D for Depth
Conjecture C for codes deals only with special types of quantum states. What can describe general pure states that cannot be approximated?
Conjecture D: For some (small) constant {d}, pure states on {n} qubits that can be approximated by noisy quantum computers can be approximated by depth-{d} quantum circuits.
Here we adopt the ordinary description of quantum circuits where in each round some gates on disjoint sets of one or two qubits are performed. Unlike the old Conjecture C, which did not exclude cluster states and thus could not serve as a Sure/Shor separator in Scott Aaronson's strict sense, the new Conjecture D may well represent such a separator in the strict sense that it does not allow efficient factoring. It deviates from the direction of earlier versions of Conjecture C since it is based on computational-theoretic terms. The new Conjecture D gives poetic justice to bounded-depth circuits. In classical computation, bounded-depth circuits of polynomial size give a mathematically fascinating yet pathetically weak computational class. In quantum computation this may be a viable borderline between reality and dream.
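To make the depth measure in Conjecture D concrete, here is a toy statevector simulator (entirely our own illustration; the Haar-ish random gates, the random pairing of qubits, and the sizes are arbitrary choices). One round applies gates on disjoint pairs of qubits, exactly as in the circuit description above, and the depth is simply the number of rounds.

```python
import numpy as np

def apply_two_qubit(state, gate, i, j, n):
    """Apply a 4x4 unitary to qubits i < j of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (i, j), (0, 1)).reshape(4, -1)
    psi = (gate @ psi).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

def random_round(n, rng):
    """One computer cycle: independent 2-qubit gates on disjoint pairs."""
    qubits = rng.permutation(n)
    pairs = [(min(a, b), max(a, b)) for a, b in zip(qubits[::2], qubits[1::2])]
    gates = []
    for p in pairs:
        m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        q, _ = np.linalg.qr(m)            # QR of a Gaussian: random unitary
        gates.append((q, p))
    return gates

def run_depth_d_circuit(n, d, seed=0):
    rng = np.random.default_rng(seed)
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for _ in range(d):                    # exactly d rounds = depth d
        for gate, (i, j) in random_round(n, rng):
            state = apply_two_qubit(state, gate, i, j, n)
    return state

psi = run_depth_d_circuit(n=8, d=3)
print("norm:", abs(np.vdot(psi, psi)))   # stays 1 up to rounding
```

Conjecture D would then say, roughly, that whatever a feasible noisy device produces can already be approximated by such a circuit for some fixed small d.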
In the Comments
The comments section of the "Quantum Super-PAC" post has seen an extremely lively discussion, for which we profusely thank all those taking part. We regret that currently we can give only the barest enumeration of some highlights—we envision a later summary of what has been learned.
Discussion of a possible refutation of Gil's conjectures via 2-local properties started in earnest with this comment by Joe Fitzsimons. See Gil's replies here and here, and further exchanges beginning next. John Sidles outlined a mathematical approach to the conjectures beginning here. Hal Swyers moved to clarify the physics involved in the discussions. Then John Preskill reviewed the goings-on, including 2-locality and the subject of Lindblad evolution as used by Gil and discussed extensively above, and continued here to head a new thread. Swyers picked up on questions about the size of controllable systems here and in a second part here. Gil outlined a reply recently here. Meanwhile, Gil rejoined a previous post's discussion of the rate of error with a comment in the "Super-PAC" post here. Alexander Vlasov re-opened the question of whether the conjectures don't already violate linearity. Sidles raised a concrete example related to earlier comments by Mikhail Katkov here. Then Gil related offline discussions with David Deutsch here. Gil has recently reviewed the debate on his own blog. He and Jim Blair also mentioned some new papers and articles beginning here.
On the technological side, Steve Flammia noted on the Quantum Pontiff blog that ion-trap technology has taken a big leap upward in scale for processes that seem hard to simulate classically, though the processes would need to be controlled more to support universal quantum computation.
Open Problems
Propose a version for Conjecture C or D, or explain why such a conjecture is misguided to start with.
108 Comments
1. May 12, 2012 9:41 am
I wish some computer scientist would write about Alexander Grothendieck.
• John Sidles permalink May 12, 2012 7:18 pm
Do I feel my leg being gently pulled? 🙂
• Serge permalink May 12, 2012 7:52 pm
Ah yes, what a genius! I agree he'd have done marvels in computer science, even though his revolutionary achievements in algebra, geometry, topology, categories, philosophy, let alone his political involvements, are well enough for one lifetime. 🙂
• John Sidles permalink May 12, 2012 9:56 pm
Serge, a newly published book well worth reading is Elaine Riehm's and Frances Hoffman's Turbulent Times in Mathematics: The Life of J. C. Fields and the History of the Fields Medal (2012), in which we find the following quotation from David Hilbert, who at the 1928 ICM endeavored to restore the bonds of mathematical collegiality that had been shattered by the First World War:
"Let us consider that we as mathematicians stand on the highest pinnacle of the cultivation of the exact sciences. We have no other choice than to assume this highest place, because all limits, especially national ones, are contrary to the nature of mathematics. It is a complete misunderstanding of our science to construct differences according to people and races, and the reasons for which this has been done are very shabby ones. Mathematics knows no races. … For mathematics, the whole cultural world is a single country."
The sobering failure of Hilbert's 1928 efforts was to become evident in the sad circumstances of Hilbert's own death, and the desperate circumstances of Grothendieck's own childhood, in the heart of the Third Reich. Are present-day circumstances any less sobering than those of Hilbert's era, and of Grothendieck's era? Is the role of mathematics any less central? Appreciation and thanks are due to Elaine Riehm and Frances Hoffman, for writing a book that helps us to ponder these questions.
• May 12, 2012 10:04 pm
We have a scheme to talk about Grothendieck sometime in the summer.
• John Sidles permalink May 13, 2012 11:30 am
Hoorah! Ken, that's exciting to look forward to! Serge's post was correct (IMHO): the successes and failures of Grothendieck's wonderful enterprises are confounding, delightful, disturbing, and instructive for all STEM disciplines.
2. ramirez permalink May 12, 2012 10:43 am
Quantum space is defined as the extra space when the integration of the matrix P=1 gets off the boundaries. as in P=1 square . the inverse functions on radicals have to comply with the equivalence. not as an Arch function were the real numbers change value on the negative sector. positive negative positive as logic gates for a nano circuit, are god for a logic circuit in a binary system. on the hyperbolic functions c=2 has an exponential on real numbers and a radical in prime numbers.
this problem gave to Enrrico Fermi the ability to create a fermion. and Einstein created the solution using 1/2 of the sine 1/2 cosine to create a prime number that can have a solution as a radical and the quantum space could exist without problem. P=NP eliminating the bipolarity on the second factor. when reaches a speeds faster than the speed of light. the particle accelerator does not reach speeds faster than the light. using C=2 if C is the constant of the speed of light as seen in the equation of Apple computers robot Jeffrey 5000 creates a gravitational force. but does not reach the prime interface as described by Einstein were Gravity creates the inverse Matrix of p=1. is different than the Arch Matrix of P=1.
3. Rachel permalink May 12, 2012 12:25 pm
This is a physics problem, and making conjectures with no basis in physics does not make sense. Are you really suggesting that Nature sees that you are trying to prepare a state encoded in a quantum error-correcting code and decides to stop you? I strongly disagree with calling these random, unsupported guesses "conjectures." A conjecture should have at least some reason behind it, not just "gives poetic justice to bounded-depth circuits."
• May 12, 2012 4:46 pm
Hi Rachel, Good question! It is a natural question to ask if in nature we can witness approximately pure states manifesting long (or high-depth) quantum processes. (Let us even allow unlimited computation power to control the process.) After all, unless there is some fault-tolerant machinery it is hard to see how "long" quantum processes can stay approximately pure. So bounded-depth processes are a natural proposal for the limit of quantum processes that do not manifest quantum fault-tolerance.
4. Serge permalink May 12, 2012 4:16 pm
The impossibility of deciding whether P=NP is a direct consequence of Heisenberg's uncertainty principle.
• May 13, 2012 1:57 pm
why would anyone even study such a possibility? It is like saying I cannot count n! quickly because of Cauchy-Schwarz!
• May 13, 2012 2:00 pm
why would anyone even study such a possibility? It is like saying to know whether I cannot count n! quickly because of Cauchy-Schwarz is undecidable!
• Serge permalink May 13, 2012 3:03 pm
Not exactly. With P=NP you have a "speed" – that of computer processes – and a "position" – the accuracy of the output result. I believe that the product of the probability for an algorithm to output accurate results by the probability for it to be efficient is lower than some fixed constant. I claim that both phenomena – the one about quantum particles and the one about computer processes – are implied by some more general principle, though I can't write out the details of a relationship between my principle and Heisenberg's.
• ramirez permalink May 13, 2012 7:33 pm
Couchy effect as an absorption coefficient can be used on different conditions, but specially under a gravitational force. as the Schwarzchild equation measures the compression state of the space under gravity force. Here is when Einstein observes that light behaves as matter and is affected by gravity also.calculates the strength of the inertial force produced by the black holes. as a P=NP its assumed that NP can coexist in the same time space, and this condition presumes the existence of a great gravitational force, in the form of antimatter . here N would be the Anti-quark, or anti-proton and P the real number(X), that exists inside the schwarz ring of gravity.
Couchy measures the intensity and speed of the absorption. However to consider that the quantum space does exist without that intense gravity would be uncertain as trying to decide the sex of a baby. here a radical have to be a proportional exponential to the times square. means we have to compress two dimensions, one for N and one for P.when C=Constant of the speed of Light and you take C=2 you create this uncertainty dilemma. its only when you consider C=Csquare when the time space paradox allows you N for Quantum space and a real space for P=1. see Schrodingers cat paradox. The Linear space with Reimman and Euler.The integration of two dimensions that allows you the existence of antimatter is still under test in the Linear particle accelerator that has fail to prove the existence of antimatter in the Higgs Boson concept The Tevatron cannot go faster than the speed of Light.
5. John Sidles permalink May 12, 2012 7:14 pm
On Shtetl Optimized, in response to a well-posed question from Ajit R. Jadhav, I described a toolkit for quantum dynamical simulation in which Conjecture C holds true, and yet the framework is sufficiently accurate for many (all?) practical quantum simulation purposes. A bibliography is included. The aggregate toolkit contains perhaps not even one original idea … still it is fun, and useful too, to appreciate how naturally many existing dynamical ideas mesh together. As for whether Nature simulates herself via this toolkit, who knows? The post does sketch an alternative-universe version of the Feynman Lectures that encompasses this eventuality.
6. May 13, 2012 2:11 pm
Dear all, Greetings from Lund. I am here for the Crafoord days celebrating the Crafoord Prize being given to Jean Bourgain and Terry Tao. There is a symposium entitled "from chaos to harmony" celebrating the occasion, with live video of the five lectures here. Here are some questions I am curious about regarding the topic of the post:
1) Can somebody explain Leggett's parameter precisely? I remember that when I tried to understand it (naively perhaps) the parameter was large for certain systems with large classical correlations. In any case, I would be happy to see a clear explanation of what the parameter is.
2) What could be potential counterexamples to the suggestion that all natural (approximately) pure evolutions are of (uniformly) bounded depth?
3) Does the note by Aram and Steve give convincing evidence regarding Conjecture C in its original form? I am very thankful to Aram and Steve, and overall I was quite convinced. But I am not entirely sure. This has two parts: a) Is the state W realistic? b) Is Conjecture C' in the form refuted by them (and there is no dispute that their example refutes Conjecture C') the right extension to qudit-operated QC of the qubit version?
• May 13, 2012 2:17 pm
To add to 1), consider a quantum circuit that maps the all-|0> state to a state f. Is there an easy way—preferably gate-by-gate inductive—to compute Leggett's D-measure of f?
• May 13, 2012 10:27 pm
a) Is the state W realistic? I would find it hard to think of a reason why it wouldn't be. It's essentially what you get when a single photon is absorbed by a gas cloud, or when you put a single photon through a diffraction grating.
• aramharrow permalink May 13, 2012 11:33 pm
Joe, you probably know this stuff better than me, but for a gas cloud of N atoms, doesn't the temperature have to scale like 1/log(N)? For photons that's also true, but I think with a better prefactor.
For example, modes of an X-ray probably have very little thermal noise in them.
• May 13, 2012 11:49 pm
Hi Aram, I was thinking of things like vapor cell quantum memories, which store the quantum state essentially as a w-state and have been demonstrated with reasonable fidelities. While certainly these are essentially constant-sized devices, the constant is enormous.
• aramharrow permalink May 13, 2012 11:51 pm
Cool, thanks!
• ramirez permalink May 15, 2012 12:17 pm
The W- state as a receptor, it does absorb wave length frequencies and they are used as synthetic retina for digital cameras, they do absorb light and releases it, the main trick here is that the photon is turned into an electric current as in the solar panel arrays. so in this way the encoded information can be transcribed into zeros and ones.Bose-Einstein condensate obtains the harmonic state of some gases when they are under pressure and a near to absolute Zero K temperature.The solid state receptors for Wide Band Antenna does work on Microwaves capturing and releasing the information that is in the air. however the antenna position losses its grip to the sine of the wave so the new W-state receptors have multiple position on fractal arrays to correct this problem, as in your cellular.
• aramharrow permalink May 13, 2012 11:47 pm
For Leggett's parameter, it's crucial that the parameter "a" be taken to be <1/2, so that classical systems always have disconnectivity equal to 0. If you take it to be 1/3, then this says that D is the largest N such that for all subsets of N qubits and all divisions of those qubits into subsystems A, B, we have S(AB) <= (S(A) + S(B))/3, where S() is the von Neumann entropy.
For evidence of depth, I think that the presence of iron in the Earth is pretty good. The only natural process we know for creating it is stellar nucleosynthesis, which (a) takes a very long time, and (b) requires quantum mechanics (and (c), I had to look up the name of on wikipedia..). Because of (a) and (b), we have evidence of deep quantum processes. I can't prove this, since I can't rule out the possibility of a low-depth classical method of producing lots of iron. Rather I think the evidence for it is like the evidence for evolution, which is that it's the only plausible theory that is consistent with the data, and that the theory alone has predicted things that weren't originally used to derive the theory.
Note that I didn't say anything about any states being pure. This is because purity is subjective, and I don't know of a way our physical theories can meaningfully depend on it. This of course is a common theme in my (and Rachel's and Peter's and others') objections to Gil's conjectures, which is that they are phrased in ways that suggest Nature may have to know which states we prefer the system to be in.
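Taking the a = 1/3 criterion in Aram's comment at face value, the entropic ingredients are easy to compute numerically on small states. The sketch below is our own (the function names and examples are ours); as the ensuing discussion shows, how to aggregate this inequality over subsets is exactly the contentious part, so the code only checks the stated inequality for a chosen split.

```python
import numpy as np

def reduced_density(psi, keep, n):
    """Reduced density matrix on the qubit subset `keep` of a pure n-qubit state."""
    keep = sorted(keep)
    rest = [q for q in range(n) if q not in keep]
    T = psi.reshape([2] * n).transpose(keep + rest).reshape(2 ** len(keep), -1)
    return T @ T.conj().T

def S(psi, subset, n):
    """Von Neumann entropy (in bits) of the reduced state on `subset`."""
    p = np.linalg.eigvalsh(reduced_density(psi, subset, n))
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def leggett_inequality(psi, A, B, n, delta=1/3):
    """Check S(A u B) <= delta*(S(A) + S(B)), the a = 1/3 criterion as stated."""
    return S(psi, A + B, n) <= delta * (S(psi, A, n) + S(psi, B, n)) + 1e-9

n = 4
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 2 ** -0.5                               # 4-qubit cat state
print(leggett_inequality(ghz, [0, 1], [2, 3], n))  # True: S(whole) = 0, parts mixed
print(leggett_inequality(ghz, [0], [1], n))        # False: S({0,1}) = 1 > 2/3
```

The GHZ example makes the delicacy plain: the inequality holds for the full system but fails for intermediate subsets, which is one face of the ambiguity raised below.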
• Gil Kalai permalink May 16, 2012 8:14 am
Dear Aram, this is a great comment with a lot of interesting things to think about. I am enthusiastic to see this clear explanation of Leggett's parameter (and it would be nice to discuss this parameter), and the iron as evidence for depth is exciting and we should certainly discuss it. I suppose I do not understand the point about purity. What do you mean by purity being subjective, what is "this" that physical theory cannot depend on, and what is the critique of my conjecture that is referred to?
• May 16, 2012 3:32 pm
Hi Aram, here is a remark regarding Leggett's parameter as you described it. In the context of Conjecture 4 and the notion of "highly entangled state", one idea was to base the notion on entanglement between partitions of the qubits into two parts. A counterexample for this idea, but for qudits, that came up in a discussion with Greg Kuperberg some years ago looks like this: let G be an expander graph with valence 3 and with 2n vertices. Take 3n Bell pairs and arrange them into 2n qudits with d=8 according to the pattern of the graph G. Then at the d=8 qudit level, this state has a lot of excess entanglement for partitions into two parts. This is achieved simply by grouping the halves of the Bell pairs and not by doing any true quantum information processing. So maybe this is an example also of a very mundane state that represents a high value of Leggett's D-parameter.
• June 22, 2012 1:16 am
Regarding the expander counter-example: there's something a little ambiguous about Leggett's definition in that he states it for states that are symmetric under permutation, so that the reduced state of any N particles depends only on N. Probably the right pessimistic interpretation for non-symmetric states is that you want to choose the worst subset. So then it becomes "D := max N s.t. for any S with |S|=N there exists T \subset S s.t. H(rho_S) <= delta (H(rho_T) + H(rho_{S-T}))". But if that's the definition, then D will be very low for this expander construction you described, and for most non-symmetric states I can think of. It doesn't feel like a very robust definition, though. If we replace |S|=N with |S|<=N then you get something different. Presumably also we should restrict N to be << system size. And certainly replacing "for any S" with "exists S" would be far too permissive; then just having a bunch of EPR pairs would count.
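Gil's Bell-pair construction is easy to play with numerically. In the sketch below (our own illustration; the small bipartite graph K_{3,3} stands in for a 3-regular expander, which genuine expander families would reproduce at growing n), we use the fact that each Bell pair crossing a vertex bipartition contributes exactly one ebit to the entanglement entropy of the resulting pure qudit state, so counting cut edges suffices.

```python
from itertools import combinations

# Each edge of a 3-regular graph carries one Bell pair; each vertex groups its
# three incident pair-halves into one d = 8 qudit.  For any vertex bipartition
# (U, complement), the bipartite entanglement entropy of the pure state equals
# the number of edges crossing the cut.

def edges_k33():
    """3-regular bipartite graph K_{3,3} on 6 vertices (toy expander stand-in)."""
    return [(u, v) for u in range(3) for v in range(3, 6)]

def cut_entropy(edges, U):
    U = set(U)
    return sum(1 for a, b in edges if (a in U) != (b in U))  # ebits across the cut

edges, n_vertices = edges_k33(), 6
worst = min(cut_entropy(edges, U)
            for U in combinations(range(n_vertices), n_vertices // 2))
print("min entanglement over balanced qudit bipartitions:", worst, "ebits")  # 5
```

Note that this global-cut count does not probe Aram's caveat above: restricting to the worst small subset of qudits, rather than to bipartitions of all of them, can behave very differently.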
• John Sidles permalink May 19, 2012 2:19 pm
Gil asks: Can somebody explain Leggett's parameter D precisely?
I am struggling with this too. Hopefully the following LaTeX will be OK (apologies in advance if there are bugs). An explicit definition of D is given in Leggett's article (eqs. 3.1-3), and three concrete examples are worked out (the first example is marred by a typographic error: S_2=1 should be S_1=S_2=0). The part of the definition that I struggle with is the pullback-and-partition of the entropy S onto subsystems. In particular, the post-pullback partition into (spatially separated? weakly coupled?) subsystems is problematic … and such partitions are problematic in classical thermodynamics too. Presciently, Leggett's article authorizes us to adjust the definition of D as needed:
We want D to be a measure of the subtlety of the correlations we need to measure to distinguish a linear superposition from a mixture. A variety of definitions will fulfil this role; for the purpose of the present paper (though quite possibly not more generally) the following seems to be adequate …
We continue as follows, with a view toward eliminating problematic references to separation. Let a system S be simulated on a Hilbert space \mathcal{H} by unraveling an ensemble of Lindbladian dynamical trajectories, and let \rho_{\mathcal{H}} be the density matrix of the trajectory ensemble thus simulated. Pull back the Lindbladian equations of motion and dynamical forms onto a rank-r tensor product manifold \mathcal{K} and let \rho_{\mathcal{K}} be the density matrix of the trajectory ensemble thus simulated. By analogy to the Flammia/Harrow measure \Delta, define a rank-dependent Kalai-style FT separator measure \Delta'(r) to be a minimum over the choice of tensor bases of the trace-separation
\Delta'(r) = \underset{\mathrm{bases\ of}\ \mathcal{K}}{\min}\ \Vert\rho_{\mathcal{H}}-\rho_{\mathcal{K}(r)}\Vert_{\mathrm{tr}}
Then a Leggett-style rank-based variant of Kalai Conjecture C is:
Kalai-type Conjecture C' For all physically realizable n-qubit trajectory ensembles, and for any fixed trace fidelity \epsilon, there is a polynomial P(n) such that \Delta'(P(n)) \lt \epsilon.
This conjecture possesses the generic virtue of most tensor-rank conjectures: computational experiments are natural and (relatively) easy. It also has the generic deficiency of tensor-rank conjectures: it is not obvious (to me) how the conjecture might be rigorously proved.
• John Sidles permalink May 19, 2012 2:24 pm
Close … let's try again … "Then a Leggett-style rank-based variant of Kalai Conjecture C is:"
Kalai-type Conjecture C' For all physically realizable n-qubit trajectories, and for any fixed trace fidelity \epsilon, there is a polynomial P(n) such that \Delta'(P(n)) \le \epsilon.
Apologies for the LaTeX glitches! 🙂
7. John Sidles permalink May 13, 2012 2:30 pm
Aram and Steve refer in several places to "our note" … it would be helpful if a link were provided to this (otherwise mysterious) note. Or have I just overlooked a link?
• May 13, 2012 2:33 pm
"Note" and "paper" are synonymous—the post went thru a long edit cycle, and their April 16 ArXiv upload came in the middle of that. The link is at the top, and actually until three days ago it was at a less-stable "arxaliv" link.
• John Sidles permalink May 13, 2012 9:48 pm
Thank you, Ken, for clarifying that! The Flammia/Harrow note "Counterexamples to Kalai's Conjecture C" looks exceedingly interesting & employs several novel constructions … plausibly it will take comparably long to digest as it took to conceive! 🙂
8. John Sidles permalink May 14, 2012 7:16 am
As I slowly digest Steve and Aram's (really excellent and enjoyable!) arXiv note "Counterexamples to Kalai's Conjecture C" (arXiv:1204.3404v1), one concern that arises is associated to the restriction "states ρ which have been efficiently prepared." In designing an apparatus for efficient state preparation, it is natural to begin by generalizing the apparatus shown in Figure 1 (page 6) of Pironio et al's much-cited "Random Numbers Certified by Bell's Theorem" (arXiv:0911.3427v3). The natural generalization is conceptually simple: specify more ion cells, that generate more outgoing photons, such that state preparation is heralded by higher-order coincidence detection, as observed through unitary-transform interferometers having larger numbers of input/output channels. Visually speaking, just add more rows to Figure 1! AFAICT, in the large-n qubit limit this natural generalization is robust with respect to validation (that is, the state heralding is reliable, when we see it) but it is exponentially inefficient (the mean waiting time for state heralding is exponentially long in n). We might hope that this efficiency obstruction is purely technical, to be overcome (e.g.) with greater detection efficiency and lower-loss optical coupling between ions and detectors. But this limit is of course the limit of strong renormalization, and it is not obvious (to me) that the qubit physics remains intact following strong renormalization. These are hard questions.
Over on Shtetl Optimized, where these same issues are being discussed, I had occasion to quote the following passage:
"Non-physicists often have the mistaken idea that quantum mechanics is hard. Unfortunately, many physicists have done nothing to correct that idea. But in newer textbooks, courses, and survey articles, the truth is starting to come out: if you wish to understand the central 'paradoxes' of quantum mechanics, together with almost the entire body of research on quantum information and computing, then you do not need to know anything about wave-particle duality, ultraviolet catastrophes, Planck's constant, atomic spectra, boson-fermion statistics, or even Schrödinger's equation." (from arXiv:quant-ph/0412143v2)
Among practicing researchers, this comforting belief — which has the great merit of being immensely inspiring to beginning students — was perhaps more widely held in the 20th century than at present … because the immensely long, immensely difficult struggle to build working quantum computers has slowly and patiently been teaching us humility. That the Kalai/Flammia/Harrow Conjecture C includes the phrase "efficiently prepared" (as contrasted with "efficiently described" for example) is evidence that these lessons-learned are being assimilated and acted-upon. Surely there is a great deal more to be said regarding these issues and obstructions, and we can all hope that one outcome of this debate will be a jointly-written note from Aram and Steve and Gil, that surveys and summarizes (for 21st century students especially) the wonderfully interesting challenges and opportunities that are associated to this fine debate.
• aramharrow permalink May 14, 2012 7:42 am
Hi John, those are some good points, which I won't fully address. But I do agree that experiments that wait for multiple coincidences are not scalable, and wouldn't work for this kind of thought experiment. On the other hand, something like what Boris Blinov's group is doing (using entangled photons to entangle distant ions) would, I believe, address this problem. Obviously doing such an experiment once isn't easy, and doing it N times in parallel is only harder, but it's almost certainly harder only by a linear factor.
• John Sidles permalink May 14, 2012 8:41 am
Aram, a reference would be very helpful. I had a professor who was fond of quoting Julian Schwinger to the effect that certain facts were "well known to those who knew them well." In a similar vein, the arXiv note refers to "states whose physical plausibility is relatively uncontroversial" … and so it is natural and legitimate to wonder whether this opinion is shared by folks whose job it is to prepare these states.
• aramharrow permalink May 14, 2012 9:05 am
Some of this work is planned for the future, but this paper describes those future plans. I think it's uncontroversial that the states are physically plausible, and that any fundamental obstacle to their creation would be extraordinarily surprising, like discovering new energy levels for the Hydrogen atom. But that is consistent with the fact that doing the experiment once is going to be very hard, and doing it N times will be something like N times as hard.
• John Sidles permalink May 14, 2012 9:55 am
Aram, I will look carefully at the link you provided.
As we both appreciate, large-n entanglement obstructions typically are associated with the adverse scaling (1-\epsilon)^n \simeq e^{-\epsilon n}, where \epsilon is some (finite) single-qudit single-operation error probability, and the proposed remediations of this adverse scaling typically are equivalent to some variant of quantum error correction … even in experiments whose intrinsic dynamics seemingly is non-computational. If there is any way to evade this generic mechanism, then I am eager to grasp it!
• aramharrow permalink May 15, 2012 10:45 pm
I guess one thing I should add is that our counterexamples construct states with high entanglement (according to Gil's measure) *without* getting into the challenging parts of scalable FTQC. So our point is not a very deep statement, it's simply that conjecture C is unrelated to the question of whether FTQC can work. As for your point about epsilon vs n, note that for photons, \epsilon goes like e^{-\hbar\omega / k_B T}, which is in one sense constant, but in another sense exponentially small, and in practice can really be very small.
• aramharrow permalink May 16, 2012 10:28 pm
One more thing along these lines. John S. points out that the tensor rank is low for W states, meaning that they are relatively uninteresting from the perspective of quantum computing. Based on this, you could view our counter-example as saying that Gil's entanglement measure counts too many things as entangled, including things that are so lightly entangled as to not provide computational advantage. Thus, it does not provide the quantitative Sure/Shor separator that he is looking for.
9. Serge permalink May 14, 2012 3:18 pm
Let me explain the analogy a bit further. Heisenberg's uncertainty principle is due to the fact that, in order to locate a particle, you must shed light on it. Unfortunately, light is made of photons and photons are also particles. Similarly, in order to settle that a program is correct you have to write a proof. Unfortunately, proofs are also programs and this results in the following fact: "The more you know about the correctness of a program, the less you become able to know about its complexity class, and vice versa." This is, IMHO, the reason why all efficient "solutions" to SAT are not known to solve every instance. They only have an acceptable probability of correctness – they're called heuristic algorithms. Conversely, the algorithms used in artificial intelligence are often proven mathematically correct… but very little is ever said about their efficiency.
• Serge Ganachaud permalink May 14, 2012 7:05 pm
I wouldn't insist, but my preceding comment is a step towards P=NP being undecidable. 🙂
• Serge permalink May 26, 2012 1:39 pm
In that regard, NP-completeness could be viewed as computer science's counterpart of the quantum level.
• Serge permalink May 26, 2012 8:20 pm
… and the analogy goes further, as the macroscopic level is made of the quantum level just like NP problems are polynomially reducible to NP-complete problems. I really think that defining suitable distances or topologies on the sets of problems, of proofs and of programs would suffice to prove that P=NP can't be proved.
10. May 15, 2012 1:26 am
Aram and Steve's state W and related states
The parameter K(ρ). Here is a reminder of what K(ρ) is. Given a subset B of m qubits, consider the convex hull F[B] of all states that, for some k, factor into a tensor product of a state on some k of the qubits and a state on the other m-k qubits.
When we start with a state ψ on B, we consider D(ψ, F[B]), the trace distance between ψ and F[B]. When we have a state ρ on n qubits, we define K(ρ) as the sum, over all subsets B of qubits, of D(ρ[B], F[B]). Here ρ[B] is the restriction of ρ to the Hilbert space describing the qubits in B.
The states W. Next let me remind you what are the states we talk about. We consider the state W_n = (1/\sqrt n)|00\dots 01\rangle + (1/\sqrt n)|00\dots 10\rangle + \dots + (1/\sqrt n)|10\dots 00\rangle. Let us also consider the more general state W_{n,k}, which is the superposition of all vectors |\epsilon_1 \epsilon_2 \dots \epsilon_n\rangle where the \epsilon_i are 0 or 1 and precisely k of them are 1. (So W_n = W_{n,1}.)
Dicke states. In my paper I considered the state W_{2n,n} as a potential counterexample to Conjecture C. Again let me remind you that Conjecture C asserts that for a realistic quantum state ρ, K(ρ) attains a small value (polynomial in n). I thought about W_{2n,n} as a simulation of 2n bosons each having a ground state |0\rangle and an excited state |1\rangle, such that each state has occupation number precisely n. While K(W_{2n,n}) is exponentially large in n, a rather similar pure state, the tensor product of n copies of (1/\sqrt 2)(|0\rangle + |1\rangle), is not entangled and for it K is n. So it is quite important to understand well what is the state which is experimentally created.
What Conjecture 1 says: Already Conjecture 1 is relevant to Aram and Steve's W_n (and the more general W_{n,k}). The conjecture predicts that the noisy W_n states are mixtures of different W_{n,k}, where k is concentrated around 1. It can be, say, the mixture, denoted by W_n[t], of W_{n,1} with probability 1-t and W_{n,0} with probability t. (Perhaps with additional ordinary independent noise on top.) So we can ask two questions:
1) Are the noisy W_n states created in the laboratory in agreement with Conjecture 1? If we realize W_{n,k} by k photons, the question is if the number of photons itself is stable. Joe, when you refer to the state W_n that was constructed with reasonable fidelities, in the paper you have cited, what are the mixed states which are actually being created?
2) The second question is about a mathematical computation that extends what Aram and Steve did: What is the value of K(W_n[t])? Namely, if we have a noisy W_n of the type I described above, what will be its value of K? Is it still exponential in n?
Leggett's disconnectivity parameter. If somebody is willing to write down what is Leggett's definition of his disconnectivity parameter and explain it, this will make it easier to discuss Leggett's parameter. The definition is short but I don't understand it that well.
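These states are straightforward to write down numerically. The sketch below is our own (the helper names are ours); it constructs W_{n,k} and the mixture W_n[t] as a density matrix, but does not attempt to compute K(ρ) itself, since the distance to the convex hull F[B] is a nontrivial optimization.

```python
import numpy as np
from itertools import combinations

def dicke(n, k):
    """W_{n,k}: equal superposition of all n-qubit basis states of weight k
    (so W_n = dicke(n, 1) and dicke(n, 0) is the all-zeros state)."""
    psi = np.zeros(2 ** n)
    for ones in combinations(range(n), k):
        psi[sum(1 << q for q in ones)] = 1.0
    return psi / np.linalg.norm(psi)

def noisy_w(n, t):
    """The mixture W_n[t] from the comment: W_{n,1} with probability 1-t and
    W_{n,0} with probability t (no extra independent noise on top)."""
    w1, w0 = dicke(n, 1), dicke(n, 0)
    return (1 - t) * np.outer(w1, w1) + t * np.outer(w0, w0)

rho = noisy_w(6, 0.1)
print("trace:", np.trace(rho).round(6), "| purity:", np.trace(rho @ rho).round(4))
```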
• May 15, 2012 3:19 am
Hi Gil, The paper I referred to was a survey paper, not any one experiment. However, in quantum memory type experiments they aren't actually explicitly trying to generate w-states. Their ultimate goal is to basically absorb a photon for some period of time and then emit it. The physics of the situation is such that the state of the vapour is pretty close to a w-state, but that isn't really what they care about (although it is maybe what we care about); it is simply the mechanism for their trick to work. The fidelity I was talking about was of the emitted photon. This is only indirect evidence of the w-state, and measuring the state of the vapor itself seems likely to be beyond our current technological capabilities, but I believe it is reasonable evidence that we can create w-like states on a large scale.
• May 15, 2012 5:16 pm
Hi Aram, Joe, all, The motivation behind the parameter K(ρ) was indeed coming from error-correcting codes that correct c errors. There, small subsets of qubits behave like product states, but for larger sets (of size c+1 if I remember right) you will get a substantial contribution to the terms defining K. As Aram and Steve showed, much more mundane states like W_n have an exponential value for the parameter K. I certainly agree that W does not look as expressing exotically strong entanglement. And I tend to agree that W-like states can be created. What we can think about is if such expected W-like states like those I described above also have an exponential value for K. Also I am not sure if we exhausted listing all possible ways that the state W can be implemented.
• May 15, 2012 11:31 pm
Hi Gil, I was just trying to answer your question "is the W state realistic?". I think we have pretty strong evidence that it is, even at extremely large scales. Certainly we have not considered even a small fraction of the ways it can arise with relative ease, but I would have thought even the few examples considered thus far should be convincing enough on their own.
• Gil Kalai permalink May 16, 2012 7:53 am
Right, Joe, thanks. I am quite convinced about W being realistic. My follow-up question was how realistic W-like states look. In particular, how do they look as mixed state approximations of W? (This may differ for different examples, which is why I thought it will be useful to consider more examples.) This follow-up question is relevant to first being sure about the parameter K, and also to Conjecture 1, which predicts how mixed state approximations of W look.
• aramharrow permalink May 16, 2012 8:04 am
I think that what makes the photon-based W states realistic is that the per-qubit noise decreases exponentially with photon frequency, or more precisely, photon frequency divided by temperature. (And this noise should be on the order of 1 / #qubits.) So large, nearly-pure, W states should be feasible. If I've calculated right, then this ratio should be on the order of 100, when the photons are visible light, and the temperature is room temperature. This means that thermal noise per qubit would be e^{-100}. I guess this means that other sources of noise will be more relevant. Although photon loss isn't much of a risk. I'm not sure what the dominant source of noise would be, really.
• May 16, 2012 12:30 pm
Aram: Once you bring detectors into the picture, you would need to worry about dark counts, which would become more and more important as the probability of there being a real photon in that mode decreases. This doesn't alter the underlying state, but would decrease the fidelity of any reconstructed density matrix.
• aramharrow permalink May 16, 2012 12:39 pm
Sure, but for the purpose of the conjecture, we don't need detectors; we just need to believe that at some point, the state existed.
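Aram's order-of-magnitude claim is a one-line computation. This is our own back-of-the-envelope check, with 550 nm light and 300 K as assumed illustrative values:

```python
from math import exp
from scipy.constants import h, c, k

wavelength, T = 550e-9, 300.0            # visible photon, room temperature
ratio = h * c / (wavelength * k * T)     # = hbar*omega / k_B*T
print(f"hbar*omega / kT ~ {ratio:.0f}")                      # ~ 87, order of 100
print(f"thermal occupation ~ e^-ratio ~ {exp(-ratio):.1e}")  # ~ 1e-38
```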
• ramirez permalink May 15, 2012 10:17 pm
Leggett's Dis-connectivity parameter (off line). begins when we are trying to create some programing strings on a programatic lang like de C plus , or C plus plus. it does not define the postions on a Matrix. we are considering that the zeros an ones are traveling at the speed of light. Eisenberg's uncertainty says that in order to read a bit you have to write it first. there is nothing faster than the speed of light traveling in the empty space, once you go off boundaries of the domain of the Matrix P=1 the recording bits are unreachable. according to Bayes any statistical notacion or bit recorded on spaces (sideral) out of the reach of a logic gate disrupts the real time connectivity, lets say that you send an spatial probe out of the system and you need to comunicate in real time with it, you send the information of the programatic strings wherever they are, but the distance the probe is moving is a couple o billion light yeas away simply the information wouldn't be there on time and several years later you receive some static final. what does it happen? you loose a grip of real time. Here Eistein talks about the light bending coefficient when the radius of the matrix integration domain goes off the limits of connectivity of communication of a logic gate. The generated inertial force at the end of the string will catch a gravitational force as the spin on the radius goes on. this is going to create an inverse value that is considered as antimatter modifying its structure . Einstein Observes the star lights of the super nova's detonation(gamma rays) and sees that the exploding stars generate a light pulse that travels faster than the Limit of the light constant measured in an empty space. at this condition Eistein calls it Quanta. and states that Star Light travels in Quanta. so he does not takes C plus plus. as a solution for the dis-connectivity problem he divides P=1 between the sideral time and real time, obtaining B as a radical of the space-time P=1 equals P=-1 or inverse logic gate. as a radical of 1 he obtains K=0 because the bit traves in linear regression. this considerations came with the relativity theory where the speed of the light is relative to the conductor where it does travel. and to reconnect the logic gates in time space( two computers in sideral distances) needs to accelerate to C square times. through a gravitational compression that can liberate a propulsion force that breaks the speed of the Light. (Quanta is considered a worm hole), to create stacks for programing strings According to Bayes theorem requires to consider the variance and the deviation standard. here de Hue or deepness are a primordial problem due to the current flow in a quantum solid state receptor. the compression state for the Large hadron Collider that canot reach speeds faster than the Light. Tera-Electron-Volt cannot generate this antimatter needed for this kind of propulsion.
11. John Sidles permalink May 16, 2012 10:29 am
Please let me say that I too regard W-states as being realistic (that is, experimentally feasible). For me, the salient feature of W-states is not their exponential-in-n K-value, but rather their polynomial-in-n tensor rank. Respecting tensor rank as a natural measure of quantum state feasibility, two recent survey articles (that IMHO are very interesting and well-written) are Cirac, Verstraete, and Murg, "Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems" (arXiv:0907.2796v1), and also Cirac and Verstraete, "Renormalization and tensor product states in spin chains and lattices" (arXiv:0910.1130v1). In the former we read:
As it turns out, all physical states live on a tiny submanifold of Hilbert space. This opens up a very interesting perspective in the context of the description of quantum many-body systems, as there might exist an efficient parametrization of such a submanifold that would provide the natural language for describing those systems.
and in the latter we read: The fact that [low rank] product states in some occasions may capture the physics of a many-body problem may look very surprising at first sight: if we choose a random state in the Hilbert space (say, according to the Haar measure), the overlap with a product state will be exponentially small with N. This apparent contradiction is resolved by the fact that the states that appear in Nature are not random states, but have very peculiar forms. This is so because of the following reason … These considerations from Cirac, Verstraete, and Murg suggest that perhaps Gil Kalai's K-measure might usefully be evolved into a rank-sensitive R-measure … the granular details of this modification are what I am presently thinking about. Please let me thank everyone for helping to sustain this wonderful dialog! 🙂 • John Sidles permalink May 16, 2012 2:01 pm As an addendum, it turns out that the above-referenced Cirac/Verstraete/Murg preprint arXiv:0907.2796v1 is substantially the same work as reference [3] of the Flammia/Harrow note "Counterexamples to Kalai's Conjecture C" (arXiv:1204.3404v1). It is striking that one and the same article serves simultaneously to: (1) inspire counterexamples to Gil Kalai's specific conjectures, and (2) inspire confidence that the overall thrust of these conjectures is plausibly correct. As usual, Feynman provides an aphorism that is à propos: "A great deal more is known than has been proved." In the present instance, this would correspond to "We (believe that we?) know that Gil's thesis is correct (but in what form?), however we have not (as yet?) proved it." • John Sidles permalink May 17, 2012 4:41 am Further respecting tensor rank, I tracked down the provenance of the Feynman quote (or misquote, as it turns out). From Feynman's Nobel Lecture "The Development of the Space-Time View of Quantum Electrodynamics": Today all physicists know from studying Einstein and Bohr, that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. … Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc. In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven. With regard to product states, we have works like J. M. Landsberg's "Geometry and the complexity of matrix multiplication" (Bull. AMS, 2008) to remind us of how very much is known about these state-manifolds and, equally strikingly, how very much is not known. And so it seems (to me) that Gil's conjectures are very much in accord with this honorable tradition of mathematics and physics: seeking to state concretely, and prove rigorously, an understanding for which there exists an impressive-but-imperfect body of evidence. 12. May 18, 2012 4:40 am Quantum FT separators, the parameter K(ρ), and the state W. 1) Conjecture C is meant to draw a line (of asymptotic nature) between states whose construction does not require quantum fault tolerance and states which require quantum fault tolerance.
We will call such a proposed separation a quantum FT-separator. 2) The border line was supposed to leave out quantum error-correcting codes that correct a number of errors that grows to infinity with the number of qubits. 3) The parameter K(ρ) was based on this idea, since for an error-correcting code correcting c errors on n qubits its value is roughly n^c. However, as Aram and Steve showed, the much more mundane state W has an exponentially large value of K. This means that K does not capture what it was supposed to capture. 4) As mundane as W is, it is interesting to examine how it can be implemented and what mixed-state approximations we can expect for W. (This is relevant also to my Conjecture 1.) In order to be sure that K is not appropriate for drawing any reasonable line of the intended sort, it will be useful to compute K for such W-like states, e.g., what I called W_n[t]. Aram and Steve's qudit example and Leggett's disconnectivity parameter 5) Aram and Steve's qudit construction is based on the idea that for a state which can be created without quantum fault tolerance, the parameter K(ρ) (extended to qudits) should remain low under every way of grouping qubits together into bounded-size blocks. They exhibit a state which certainly can be prepared on 3n qubits such that, when the qubits are grouped into sets of 3, the parameter K(ρ) for the qudits becomes exponential. 6) It is certainly a nice property for a parameter proposed as a quantum FT-separator to remain low under such grouping. I am not sure that it is conceptually correct to make this a requirement, and I will discuss this matter in a separate comment. 7) It is noted in this remark that a different qudit example in a similar spirit to Aram and Steve's may be used to exhibit a very mundane state with a high value of Leggett's disconnectivity parameter (in the way Aram described this parameter in this comment). Let's discuss it! Bounded-depth circuits as FT-separators 8) The principal proposed FT-separator described in the post is based on bounded-depth computation. With the exception of a comment by Aram, we have not discussed this proposal so far. One counterargument raised by Aram is based on nature's ability to create heavy atoms. This is a terrific idea. It can be interesting to discuss whether the process leading to heavy atoms requires some sort of quantum FT, requires "long" (high-depth) evolutions, or perhaps even exhibits superior computational power. (I am skeptical regarding these possibilities.) 9) It will be interesting to describe experimental processes that may exhibit or require long quantum evolutions. 10) One of the nice things about bounded-depth classical computation is that it leads to functions with very restricted properties: bounded total influence (Hastad-Boppana), exponentially decaying Fourier coefficients (Linial-Mansour-Nisan), etc. Are there analogous results in the quantum case? 11) The bounded-depth parameter satisfies the grouping requirement because we can regard qubit operations as qudit operations and replace each computer cycle at the qubit level by several qudit cycles. • John Sidles permalink May 18, 2012 7:32 am Gil, thank you for this very fine summary.
For me, the most natural candidate for an FT-separator is the tensor rank, that is, "n-qubit states require FT iff their rank is exponential in n." Perhaps the main objection to this separation is not that it is implausible, but rather that (with our present mathematical toolkit) it is so very difficult to prove rigorous theorems relating to tensor rank. Christopher Hillar and Lek-Heng Lim's preprint "Most tensor problems are NP-hard" (arXiv:0911.1393v3 [cs.CC]) provides an engaging discussion of these issues, with the witty conclusion: "Bernd Sturmfels once made the remark to us that 'All interesting problems are NP-hard.' In light of this, we would like to view our article as evidence that most tensor problems are interesting." From this perspective, perhaps progress in establishing (rigorous) Sure/Shor separations is destined to parallel progress in establishing (rigorous) complexity-class separations. To put it another way, we would be similarly astounded at any of the following four announcements: • a rigorous resolution of P = NP?, or • a rigorous proof of quantum computing infeasibility, or • a practical demonstration of a large-n quantum computation, or • experimental evidence that Nature's state-manifold is non-Hilbert. And the mathematical reasons for our amazement would be similar in all four cases. • John Sidles permalink May 18, 2012 7:53 am Hmmm … the concluding four-item list was truncated by a LaTeX error. The intended list was: • a rigorous rank-based FT separation, or • experimental demonstration of a large-n quantum computer, or … 13. May 18, 2012 8:11 am Dear John, Let me just first make sure that we all understand the term tensor rank in the same way. It is the minimum number of product pure states required to represent your pure state as a linear combination. (Am I right?) Since we talk about approximation, we should perhaps replace "represent" by "approximate." Anyway, tensor rank seems a natural thing to think about. (And I don't remember whether I never considered it or just forgot.) I would worry that Aram and Steve's qudit example may have large tensor rank in the qudit tensor structure. I prefer talking about FT-separators and not about Sure/Shor separators, mainly to avoid computational-complexity issues. The issue of Sure/Shor and FT separators is much simpler and clearer if we talk about noisy states and not about pure states. The simplest FT-separator is a (protected) essentially noiseless qubit. It is an interesting problem (first to put on formal grounds and then to solve) whether a single protected (essentially) noiseless qubit is a Sure/Shor separator. 14. John Sidles permalink May 18, 2012 9:02 am Yes, let's affirm that "tensor rank" shall accord with Wikipedia's definition of tensor rank (which is the same as your post's definition) and that "approximate" is a more precise description of what we want than "represent." • John Sidles permalink May 18, 2012 11:39 am Hmmm … some subtleties associated with tensor rank that are not mentioned in the Wikipedia article, in particular the distinction between the rank and the border rank of a tensor, are discussed in J. M. Landsberg's "Geometry and the complexity of matrix multiplication" (Bull. AMS, 2008). AFAICT the rank/border-rank distinction is not materially significant to rank-based FT separations. But who knows? I have myself encountered numerical instabilities associated with this distinction.
The main point is that Landsberg's definition of tensor rank, given as Definition 2.01 in his article, provides a rigorous entrée into a mathematical literature that is vast, broad, and deep. To thoroughly grasp FT separations, it seems plausible (to me) that we will have to swim in Landsberg's mathematical ocean … or at least wade in it. 🙂 15. ramirez permalink May 18, 2012 8:25 pm Tensor is the term used for "dynamic energy tension" inside the electron structure. There are two considerations about it. One is the electromagnetic field, which considers the electron spin due to its structure; it is divided into cycles, sine and cosine, and into bits and bytes: 4, 8, 16, 32, 64, etc., a logarithmic progression. The other is the variant and covariant tensors that do not possess an electromagnetic field but a gravitational tension. This inertial force creates a linear regression on the atom, not necessarily on the electron; this tensor can be found in small particles such as the gluon, the muon, etc. The term "quantic" does not exist before Einstein comes out with the relativity theory of time and space and writes a chapter making evident the difference between electromagnetism and gravity. The qubit, or quantum bit, is the subatomic charge that can be recorded in a tight space, as in Hilbert's calculations; however, since gravitational compression came to the electronics field, we can store more information in the same space. One picture used to just fit in a 2-megabyte chip; now the same chip can hold 2, 4, 8 gigabytes, and so on. This compression rate allows storing more information, but to use the term quantum bit one needs to obtain the radical compression of 1 = C, the constant of the speed of light, so the exponential should be a quadratic equation; in that case we would be recording with radicals smaller than the nano. P=1 as a matrix value has to be a cubic exponential. This quantum bit would be an antiquark or antiproton inside the programmatic stack. The hertzian wave runs at 2.4 gigahertz, but this speed would not be fast enough to bridge a logic gate over distances where C, the constant of the speed of light, is squared; we will get to the femto and yocto atomic weights. Usually this is the nuclear radiation value; this anti-qubit is out of the boundaries of the real numbers (the manifolds dilemma). The quantum bit is in what is called linear regression in time and space. The polynomial equations on integrals X, Y, Z, on a matrix P=1 squared, C squared, create linear-regression strings for programming quantum bits that are defined by Planck and Einstein with the term "momenta," and Niels Bohr has to admit one antiproton into his atomic model. The standard deviation and variance create the indices that are considered "Jacob's Ladder equations," deviations in time and space. 16. May 20, 2012 2:46 am Another parameter which can be relevant for distinguishing "simple" and "complicated" quantum states is based on the expansion in multi-Pauli operators. Suppose that your quantum computer is supposed to perform the unitary operator U, and let U = ∑_S α_S S be the multi-Pauli expansion of U, namely the expansion in terms of tensor products of Pauli operators. Here S is a word of length n in the 4-letter alphabet I, X, Y, Z, and α_S is a complex number. For a word S we denote by |S| the number of consonants in S. Define the Pauli influence of U by I(U) = ∑_S α_S² |S|. We can consider I(U) as a complexity parameter.
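[A minimal numerical sketch of this parameter, added here for illustration; it is not part of the original comment. It normalizes the coefficients as α_S = tr(SU)/2^n and, anticipating the PS2 remark below, uses |α_S|² rather than α_S². The example gates tie in to the reply below about (I + iX)/√2 versus X.]

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices, indexed by the 4-letter alphabet I, X, Y, Z.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_influence(U):
    """I(U) = sum_S |alpha_S|^2 |S|, with alpha_S = tr(S U) / 2^n and
    |S| the number of non-identity letters ("consonants") in the word S.
    (The thread's original formula has alpha_S^2; |alpha_S|^2 is used
    here so the weights sum to 1 for any unitary U.)"""
    n = int(np.log2(U.shape[0]))
    total = 0.0
    for word in itertools.product("IXYZ", repeat=n):
        S = np.array([[1.0 + 0j]])
        for letter in word:
            S = np.kron(S, PAULIS[letter])
        alpha = np.trace(S @ U) / 2**n          # expansion coefficient
        weight = sum(c != "I" for c in word)    # |S|
        total += abs(alpha) ** 2 * weight
    return total

# The "true quantum" gate (I + iX)/sqrt(2) vs. the "pseudo-classical" gate X.
G = (np.eye(2) + 1j * PAULIS["X"]) / np.sqrt(2)
print(pauli_influence(G))           # 0.5
print(pauli_influence(PAULIS["X"])) # 1.0
```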
The advantage of using the Pauli expansion is that it is much simpler compared to parameters like my K and tensor rank. (In some other cases it turned out that the multi-Pauli expansion is the best way to express my conjectures mathematically.) • John Sidles permalink May 21, 2012 11:21 am Gil, suppose that for an n-spin system a family of unitary transforms U(n) is given such that I(U(n)) is O(P(n)) for some polynomial P(n). Does it then follow that I(log U(n)) is also O(P'(n)) for some (different) polynomial P'(n)? Here the physical motivation is that log U is a Hamiltonian that generates U. • May 21, 2012 2:53 pm Such a complexity I(H) for the "true quantum" gate H = (I + iX)/√2 would be less than for the "pseudo-classical" (swap) gate X. Is that OK? PS. A naive technical remark: is Y considered a consonant? • May 22, 2012 9:33 am PS2: Maybe in the expression for I(U) one should use |α_S|²? 17. ramirez permalink May 20, 2012 9:39 pm Pauli's structure of analysis thrives on the electrons at the atomic level: the expansion factor of the atom when it is stable and when it is in expansion (explosion). Alfred Nobel obtained his fortune by discovering dynamite; the expansion factor when the atoms are at rest, and Pauli's exclusion principle for the subatomic structure, become a quantum step toward the principle of the energy tensor. However, the exponential reaction during its combustion (released energy) generates a wavelength proportional to its compression state (solid quantum state). James Clerk Maxwell separates electricity from magnetism; what's the deal? The electromagnetic field around an iron pigtail intensifies its force (K is the magnetic constant), and this kind of compression does not have a quantum space (defined as an empty space in motion according to the theory of relativity). The electron has a curly tensor, as described by Richard Feynman at Caltech in his book "The beat of a different heart"; during its expansion state it releases a spin force (rips a curl). Einstein's conjectures (opinions) led him to split the atom and avoid the fermion problem, because the tensor structure of energy traveling in an empty space in motion had to carry a linear energy-release effect and avoid the heating of the atom before it releases the total amount of its energy, at the same time avoiding the boundary problems (manifolds) of the total release of energy in a chain reaction (obstructions in the combustion), overheating as in a nuclear reactor. The expansion factor is proportional to the inverse tensor (covariant); when this tension is at K=0, the inertial force forces the atom to travel backwards in time (this energy moving in an empty space in motion) and creates a nonmagnetic tensor like antimatter. This means that nitroglycerin is a condensed expansive fuel, like the actinides: in the presence of a detonator its energy release has a wave propulsion similar to the communications gate in your cellular phone, reaching speeds almost like the speed of light; this is what is called a quantum bit, or qubit. Pauli's subatomic structure does not have quantic form.
The nuclear energy released in a chain reaction inserts itself into the nuclei of the other atoms, reproducing the same effect as splitting the atom, due to the reason that it is traveling faster than the speed of light (this is when it is considered a quantic operator). Quantum computers use the same principle in the Hubble sidereal observations; the microchip measures its operations in gigahertz (speed), the qubits in memory stacks are compressed in gigabytes, and the programming string codes only create an assigned value of an operator, but if those operators are not defined on the memory arrays you can get a dysfunctional programmatic response. Quantum physics is being used to simplify multiple and complex operations. 18. May 21, 2012 1:31 am "…we conclude because A resembles B in one or more properties, that it does so in a certain other property." John Stuart Mill, "A System of Logic, Ratiocinative and Inductive" [1843], Chapter XX, on analogies. Learning from analogies is a difficult matter, and often discussing analogies is not productive, as it moves the discussion away from the main conceptual and technical issues. But it can also be interesting, and being as far as we are into this debate, while concentrating on the rather technical matters around Conjecture C, we can mention a few analogies. (Studying analogies was item 21 on my list of issues that were raised in our discussion.) 1) Digital computers Scott Aaronson: "When people ask me why we don't yet have quantum computers, my first response is to imagine someone asking Charles Babbage in the 1820s: 'so, when are we going to get these scalable classical computers? by 1830? or maybe 1840?' In that case, we know that it took more than a century for the technology to catch up with the theory (and in particular, for the transistor to be invented)." The main analogy is of quantum computers with digital computers, and of the quantum-computer endeavor with the digital-computer endeavor. This is, of course, an excellent analogy. It may lead to some hidden assumptions that we need to work out. 2) Perpetual motion machines The earliest mention of this analogy (known to me) is in 2001 by Peter Shor (here): "Nobody has yet found a fundamental physical principle that proves quantum computers can't work (as the second law of thermodynamics proves that perpetual motion machines can't work), and it's not because smart people haven't been looking for one." I was surprised that this provocative analogy has some real relevance to some arguments raised in the debate. See, e.g., this comment, and this one. 3) Heavier-than-air flight Chris Moore: "Syntactically, your conjectures seem to be a bit like this: 'We know that the laws of hydrodynamics could, in principle, allow for heavier-than-air flight. However, turbulence is very complicated, unpredictable, and hard to control. Since heavier-than-air flight is highly implausible, we conjecture that in any realistic system, correlated turbulence conspires to reduce the lift of an airplane so that it cannot fly for long distances.' Forgive me for poking fun, but doesn't that conjecture have a similar flavor?" This is also an interesting analogy. The obvious thing to be said is that perpetual motion machines and heavier-than-air flight represent scientific debates of the past that have already been settled. 4) Mission to Mars Scott: "Believing quantum mechanics but not accepting the possibility of QC is somewhat like believing Newtonian physics but not accepting the possibility of humans traveling to Mars."
5) Permanents/determinants; 2-SAT/XOR-SAT Aram: "If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn't apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn't apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn't also refute fault-tolerant classical computing." This is a very nice analogy, which gives a very good motivation for, and introduction to, Aram's first point. I also related to it in this comment. Of course, unlike the P=NP problem, or the question about solving equations with radicals, the feasibility of universal QC is not a problem which can be decided by a mathematical proof. 6) Solving equations with radicals When it comes to the content, I do not see much similarity between QC and solving polynomial equations. But there are two interesting points that this analogy does raise. 1) Can we work in parallel? Is it possible to divide (even unevenly) the effort and attention between two conflicting possibilities? It is quite possible that the answer is "no," because of a strong chilling effect of uncertainty. (See, e.g., this comment.) 2) The failure of the centuries-long human endeavor to find a formula for solving general degree-5 equations with radicals is not just "a flaw." It was not the case that the reason for this impossibility was a simple matter that mathematicians overlooked. The impossibility is implied by deep reasons and represents a direction that was not pursued; it required the development of a new theory over years with considerable effort. 7) The unit-cost model Leonid Levin (here): "This development [RSA and other applications of one-way functions] was all the more remarkable as the very existence of one-way (i.e., easy to compute, infeasible to invert) functions remains unproven and subject to repeated assaults. The first came from Shamir himself, one of the inventors of the RSA system. He proved in [Inf. Process. Lett. 8(1) 1979] that factoring (on infeasibility of which RSA depends) can be done in a polynomial number of arithmetic operations. This result uses a so-called 'unit-cost model,' which charges one unit for each arithmetic operation, however long the operands. Squaring a number doubles its length; repeated squaring brings it quickly to cosmological sizes. Embedding a huge array of ordinary numbers into such a long one allows one arithmetic step to do much work, e.g., to check exponentially many factor candidates. The closed-minded cryptographers, however, were not convinced, and this result brought a dismissal of the unit-cost model, not of RSA." This is an interesting analogy. 8) Analog computers This is an analogy that is often made. See, for example, these lecture notes by Boris Tsirelson, where Boris's conclusion was that the analogies between quantum computers and both digital and analog computers are inadequate, and that quantum computers should be regarded as new, uncharted territory. I find what Boris wrote convincing. (I never understood, though, what is wrong with analog computers.) In Boris's own words: "A quantum computer is neither digital nor analog: it is an accurate continuous device. Thus I do not agree with R. Landauer, whose section 3 is entitled 'Quantum parallelism: a return to the analog computer.' We do not return; we enter an absolutely new world of accurately continuous devices. It has no classical counterparts."
9) Magic noise-cancelling earphones Here is an analogy of my own: we see on the market various noise-cancelling devices that reduce noise by up to 99% or so. Is it possible, in principle, to create computer-based noise-cancelling earphones that cancel essentially 100% of the noise? More precisely, earphones that reduce the average noise level over a period of time T to O(1/n) times the original amount, where n is the number of computer cycles in T. • John Sidles permalink May 22, 2012 2:15 pm Another analogy is that we are struggling with a mismatch between "technology push" and "requirements pull." At present the "requirements pull" is relatively weak: there isn't much marketplace demand for fast factoring engines, and as for quantum dynamical simulations, during the past 20 years the Moore exponent of improvements in classical simulation capability has substantially outstripped the Moore exponent of improvements in quantum simulation capability … and there is no obvious end in sight. As for the proof "technology push," here too we have only barely begun to integrate existing quantum algebraic-informatic tools with differential-dynamic tools. As Vladimir Arnold expressed it: "Our brain has two halves: one half is responsible for the multiplication of polynomials and languages, and the other half is responsible for orientation of figures in space and all the things important in real life. Mathematics is geometry when you have to use both halves." Conclusion: we stand in need of a version of Conjecture C that is designed to be simultaneously (1) concretely responsive to the "requirements pull" of the 21st century, and (2) creatively amenable to an Arnold-style "technology push." • May 22, 2012 2:22 pm Another very good analogy, in my view, is with Bose-Einstein condensation: an idea that was theoretically proposed in 1924-25 and first realized experimentally in 1995, after attempts to do so dating from the mid-fifties. This is a great "role model" for the QC endeavor, and it is also related to various technical issues in our discussion. (Also, some of the heroes of the BE story are now part of the QC effort.) • March 20, 2014 2:42 am Another interesting analogy, with alchemy and the goal of transmuting lead into gold, was raised, e.g., by Scott Aaronson in this discussion over at Shtetl-Optimized. What is interesting here is that the principles given by atomic theory and modern chemistry for why lead cannot be turned into gold were, of course, of huge importance in science; and yet, one could say that with subsequent further understanding one can argue that it is actually possible "in principle" (but we no longer care) to turn lead into gold. (You can even say that understanding the principle for why it is impossible was crucial for understanding, later on, the principles for why it is possible.) See here for a related remark by Dick Lipton over at my blog, also referring to perpetual motion machines. • March 20, 2014 6:51 am History repeats itself. I am not even sure that these three problems (lead transmuting into gold, P=NP, large-scale quantum computing) are theoretically so distinct from each other… 19. May 29, 2012 3:05 pm Looking at Conjecture C for codes, I cannot help but think about separability of pure states, and since we are talking about qudits now, it is worth noting that it is possible to place upper and lower bounds on separable states around maximally mixed states.
See 0001075v2.pdf; it is also worth reviewing 9605038v2.pdf. If we think of noisy quantum systems as non-separable, with system and environment entangled, then the question seems to be whether we can identify some separable pure subsystem of the noisy system of some size, with a measure c of the number of errors the subsystem can correct. My thinking is that we really can't answer this question without thinking dynamically, i.e., without thinking about the time dependence of the system. If we are thinking in terms of computing, we have to place a time envelope around the beginning and end times of the computation. So, in this sense, one can think of there being a bubble in the mixed system that has sufficient life to complete some sort of operation. This makes one want to use the constant c in the context of a measure of temporal pure states that follow a decay function like exp(-ct), where the upper bound on c is limited largely by the remaining entropy growth in the larger noisy system. Quantum teleportation gets around no-cloning by destroying one copy and creating another, with a spacetime gap between the two copies. So it doesn't seem counterintuitive to suggest that we can introduce similar temporal restrictions on the code. So if we dump any notion of eternally pure states and begin asking questions about the scalability of more temporal pure states, I think the size of the separable pure states will be largely dictated by the size of the larger noisy system and by where that system is in its evolution with respect to some observer. 20. May 31, 2012 12:57 pm A piece of news related to the debate: here at the Hebrew University of Jerusalem, the new Quantum Information Science Center had its kick-off workshop yesterday, May 30, and I gave a lecture on the debate with Aram. Here are the slides of the lecture. It covered my initial post and Aram's three posts but did not go into the rejoinders and the discussion. There were several interesting comments related to the discussion, which I will try to share later. As quantum information is an interdisciplinary area in the true sense of the word, and also in the good sense of the word, an area that involves several cutting-edge scientific topics, it is only natural that HU's authorities enthusiastically endorsed the initiative to establish this new center. The entire workshop was very interesting. Dorit Aharonov gave a beautiful talk claiming that a new scientific paradigm of verification is needed to check quantum mechanics beyond BPP. This talk is quite relevant to a discussion we had here on the post "Is this a quantum computer". The other talks were also very interesting. 21. ramirez permalink June 4, 2012 11:44 am I've seen this dialog going from polynomials to black holes, and it seems to me that you are looking for the "Gödel" references in modern physics. The dialog with the "Aram"-aic on the signature of the glyph-encrypted black door that talks about god's paradise is very similar to the quantum time-space in the Torah, which talks about creation, "one empty space where he created the material things." Einstein said the same thing, "an empty space," but he added "in motion." Gödel's polynomial, on the quintic equation in two dimensions, where you take one ordinal line X to the exponential 5: "from one to five there is a gap to reach that place (here there is motion)"; this was not solved in that time. This gap created the Anne "Frank"-incense room, the annex to the house where she was hiding.
This gap factor is the same principle as the quintic "mystery" of the Golden Shrine and the small temple behind it in Israel. This equation was discovered by the German officers upon its denunciation. The separation of the origin of X from its radical created a linear gap called hue, or deepness. So Einstein had to go to a three-dimensional ordinal sketch and use the principle of "gravity as two forces that attract and repel each other," where the radical is a compression state (quantum state). Some get confused by this assumption, saying gravity is = 2, where in reality it is one at ground zero; so he decides to split the exponential function of one and have 1/2 sine, 1/2 cosine = 1. In the quintic equation the observer sees the variable X moving from left to right or right to left, so he changes position to the Z variable, and he sees that the light bends under the inertial force when moving from X zero to X5; what you are seeing is that the point of origin moved, because the place where you are standing is also moving (Galileo's paradox). So Einstein made some calculations and discovered those two forces that counteract each other, and he uses the equation of 8 times the radius, or the speed of light, creating the "Octil" parameter (Octli); this inertial force would create the gravity needed to create a quantum state of particles in motion in an empty space (the Bose-Einstein principle). This gravity shield (the Schwarzschild ring-of-gravity equation), influenced by the spinning of the particles on a distant polynomial, would create a gravity line from X zero to X-1 = radical. The first gravity force on "Z," when the integration of the variables in a displacement toward the point of 8 times the radius at the speed of light has surpassed the boundaries of the real numbers (manifolds): the numeric perception becomes unreal, and 1/2 can be considered different from the original values (values of perception, Jean Piaget). The quantum space is considered an extra dimension when the integration values (samples) are farther away than the distance of the speed of light. The uncertainty of finding the same spot at the same time-space, in another dimension, has brought the idea of quantum bits, to make terahertz-fast microchips for recording quantum bits. However, one bit-second in the fire might seem like a year, or one billion years with Ruth might feel like a second. These are the principles of the Aramaic encryption language; later on, the Hittite language shows some modification of time-space, from the present to the future. Any space that you can see and perceive with your eyes has an arch function inside your mind. The phrase "Victory is for her who wins with her sight and ankles" was used during the Roman Empire as a symbol of power. Who is she? The Tevatron is trying to generate this gravitational force to accelerate the hydrogen in the fuel cell and use water as fuel. Water as fuel has to have this pressure; the fuel-pressure sensor has to indicate the fission point where the hydrogen atom jumps from one dimension to another, generating friction and temperature within itself. It is called the Mikamoka antimatter bit. The Borgia Codex shows some codifications that talk about this empty space in motion, but it is similar to the old Babylonian cuneiform script of Mount Sinai. Shalom. 22. June 4, 2012 1:02 pm One interesting issue that was raised by Nir Davidson at our quantum information center kick-off workshop is the "paradox" regarding chaotic behavior in quantum and in classical systems.
In a classical chaotic system, two nearby initial states may be carried far apart by the dynamics. In fact, their locations may become statistically independent (in a certain well-defined sense). In "contrast," for two nearby states in a quantum evolution, their distance (trace distance) remains fixed along the evolution. Nir described a formulation of the "paradox" by Asher Peres, as well as how to resolve it, and some related experimental work. This issue is relevant also to classical and quantum computers. If we corrupt a single bit in a digital computer, then as the computation proceeds we can assume that this error will infect more and more bits, so that the entire computer memory will be corrupted. In "contrast," if we let quantum noise affect a single qubit and continue the quantum evolution without noise, then the trace distance between the intended state and the noisy state remains the same. What can explain this difference? The answer (I think) is quite simple. It has to do with the distinction between measuring noise in terms of trace distance and measuring it in terms of qubit errors. When you corrupt one qubit and let the error propagate through a complicated noiseless quantum computation, the trace distance between the intended and noisy states stays fixed, but the number of qubit errors grows with the computation; so, just as in the classical case, the noise will affect the entire computer memory. This is related to the fact that the main harm in correlated errors is that the error rate itself scales up. 23. Serge permalink June 6, 2012 5:13 am Had computer science been a business of engineers and physicists right from its beginnings, I think that greater emphasis would have been put on processes rather than on programs. Processes are physical objects whereas programs are just mathematical ones, and processes are everywhere in Nature. For example, the fact that it's much more difficult to factor a large composite number than it is to multiply two large primes is somewhat reminiscent of the nuclear force that glues protons together inside atoms: breaking a nucleus apart requires a lot of energy as well. When the unsolved problems of complexity theory are considered more systematically with a physicist's eye, maybe new laws of the physics of computing will be discovered instead of new axioms and proofs about algorithms. • Serge permalink June 6, 2012 10:07 am To put it differently: trying to guess the behavior of a process by means of its program is like trying to guess somebody's life by means of their DNA code. Processes are executed by physical devices which are themselves subject to the laws of physics. That doesn't answer the P vs. NP question, which I believe is undecidable. But it might explain why the world seems to behave as though P != NP. • June 6, 2012 10:31 am Hi Serge, regarding the P=NP problem and your beliefs about it: the possibility that the question is undecidable has been raised, and there is a related research agenda. Unfortunately, proving definite results in this direction appears to get "stuck" even a bit earlier than proving definite results about computational-complexity hardness. (If you want to check your reasoning regarding P=NP being undecidable, one standard thing to do is to try to see the distinction with problems like 2-SAT that are known to be feasible.) You mainly raise two other issues which seem interesting.
The first is about our inability to predict the evolution of a computer program (described, say, by a DNA code) when the evolution depends on unpredictable stochastic inputs. The second is about our inability to predict the evolution of a computer program (again, a DNA code is an example) when we do not know precisely what the program is. (Also, the analogy between factoring and breaking a nucleus into parts is cute, but it is not clear how useful it can be.) The distinction between (physics and engineering) processes and (mathematical) programs is not clear. • Serge permalink June 6, 2012 11:48 am Hi Gil, thank you very much for your interesting answer. A clear distinction between programs and processes is useful in operating systems, a process being a specific execution of a program. One program leads to infinitely many possible executions of it. When mathematicians speak of a program, I think they also mean all its potential executions. Regarding P vs. NP, there might exist a polynomial algorithm for SAT, but executing it would run up against physical limits, such as a program too big to fit into memory, for example. Or maybe our brains just couldn't understand it and therefore couldn't even design it. In addition to the unpredictability of the behavior of programs due to unpredictable stochastic inputs or to unknown code, in some cases that behavior could itself be undecidable. I'm thinking of the algorithm that Ken commented on in "The Traveling Salesman's Power," saying there is an already-known algorithm A accepting TSP such that if P=NP then A runs in polynomial time. 24. June 10, 2012 7:44 am John Preskill's recent paper "Quantum computing and the entanglement frontier" touches on many issues raised in our debate. Very much recommended! • John Sidles permalink June 10, 2012 9:30 am Gil, please let me commend this same Preskill essay too. In it we read the following thought-provoking passage (p. 5): "A quantum computer simulating evolution … might not be easy to check with a classical computer; instead one quantum computer could be checked by another, or by doing an experiment (which is almost the same thing)." Adopting Preskill's language to express the intuition that motivates Kalai's Conjecture C (as I read it) leads us to the notion that classical computers suffice to verifiably reproduce any and all simulations by quantum computers, insofar as those simulations apply to feasible physical experiments. And here the notion of a feasible physical experiment is to be taken to mean, concretely, any and all physical systems whose Hamiltonian/Lindbladian generators are stationary. In the preceding, the stipulation "stationary" is chosen deliberately, with a view toward crafting a concrete presentation of Conjecture C that affords ample plausible scope for near-term advances in practical simulation, without definitively excluding a longer-term role for quantum computational simulation. As a colleague of mine from Brooklyn was fond of saying, such a conjecture would be "better than 'purrfect', it would be 'poifect'!" 🙂 • June 12, 2012 7:11 am Dear John, I have similar sentiments regarding the role and scope of Conjecture C. The draft of my post had a long obituary of Conjecture C (in the form originally made), starting with: "Conjecture C, while rooted in quantum-computer skepticism, was a uniter and not a divider!
It expressed our united aim to find a dividing line between the pre- and post-universal-quantum-computer eras." Following Ken's mathematical-formulations-as-car-engines metaphor, the following picture of me and Conjecture C was proposed. • John Sidles permalink June 12, 2012 8:27 am LOL … Gil, perhaps Conjecture C may yet be reborn as a phoenix arising from the ashes! 🙂 • June 12, 2012 9:03 am Indeed, we have good reasons to give up on the parameter K(ρ), but we did raise some appealing alternative parameters. In particular, the conjecture that the depth of quantum processes is essentially bounded is interesting from both the conceptual and the technical points of view. (The idea that the emergence of iron is a counterexample is terrific, but I do not think that it is correct…) 25. June 10, 2012 11:03 am As I read the Preskill paper, my thoughts wander to examples of brute-force quantum computers. The idea is this: think of the LHC. What is it actually doing? It is trying to identify particles predicted by various models of particle physics, and it is also verifying the production cross sections of those particles. So, in some sense, we have models that can make predictions that are in some way computable using a classical computer, and we are building a machine that can verify that those models are accurate. So what is the LHC? Is it a machine or a brute-force quantum computer? No one questions that by accelerating particles and smashing them together we are generating new particles that follow some sort of function; but neither does anyone question that what the LHC is simulating is an earlier state of the universe (and that might be a good question to ask). Another, more accessible, potential example is found in the study of fluid dynamics. Although we have fairly good classical formulas for modeling fluid flow in several situations, the modeling of complicated turbulent systems is extraordinarily difficult, and in many cases scale models must be built in order to measure the "real" fluid flow of the system. Again, if we accept a quantum existence, what have we actually built with our model? We have resorted to a type of brute-force method in order to solve a real-world computational problem. As I think further about the question of QECC, I can't help but notice the similarity between the difficulty of developing QECC and the difficulty of building stable fusion reactors. In a fusion reactor the goal is to build a stable, long-lasting state of matter; invariably we can see that state as a quantum state, and the problem is similar: how do we keep the state stable so that "noise" from the environment doesn't collapse it? Once again, we are looking for a brute-force method of solving an otherwise computational problem. Freeman Dyson recently published a book review where he compared string cosmologists to natural philosophers and other "creative" thinkers. However, what he failed to recognize is that the questions being asked in those explorations do intersect with real questions in quantum computing, such as the relationship between axions and anyons, as highlighted by Wilczek [2]. This brings me to some of the more current questions in the debate surrounding SUSY and the theories that rely upon its existence. I look at the recent Straub paper [3] and see a graph with the SM as a point in a vastly larger parameter space.
Although by design all the other potential models contain the SM as a shared common point, I can't help but think about the situation coming from the other direction and looking at all the potential models that have the SM as a common point. Although I am not a subscriber to any notion of a multiverse as envisioned by sci-fi and pop-sci writers, I am interested in this idea of other stable solutions, or perturbations of our particular stable solution. Preskill does an excellent job of highlighting the question of what can't be simulated on a quantum computer. We can't give mass to a simulation in a quantum computer; however, we know that there are several solutions out there that could be explored that do not require mass, and I think those are worth exploring. 26. June 12, 2012 2:14 am The universe: is it noisy? Is it a quantum computer? Why not two non-interacting quantum computers? The idea of the entire universe as a huge quantum computer was mentioned in several comments (and is an item on our long agenda). Also, the universe being described by a pure quantum evolution was mentioned and was related to Aram's second thought experiment. It feels rather uncomfortable to talk about the entire universe, or to draw conclusions from it, but let me try to make some comments. 1) The claim that the entire universe runs a pure evolution seems reasonable but not particularly useful. (There are theories suggesting otherwise, which are outside of quantum mechanics.) 2) The claim that the entire universe is a huge (noiseless) quantum computer which computes its own evolution is also made quite often. Again, it is not clear how useful this point of view is, and I am not sufficiently familiar with the literature on it. The universe as a huge noiseless quantum computer can be regarded as an argument against the claim that quantum computers are inherently noisy. 3) As we noted already, quantum computers are based on local operations, and therefore the states that can be reached by quantum computers are a tiny part of all quantum states. For example, a state described by a generic unitary operator is unfeasible. (In our off-line discussions we raised the question of whether such non-local states appear in nature.) 4) An appealing possibility (in my view) for our universe is that of two (or several) non-interacting (or, more precisely, extremely weakly interacting) quantum computers. We can have on the same Hilbert space two different independent tensor product structures, so that every state is a superposition of two states, each described by one of the two quantum computers. In this case, states achievable by one quantum computer will be nearly orthogonal to states achieved by the other. (This possibility does not rely on the hypothesis of no quantum error-correction, although it will be "easier" for two quantum computers not to be able to interact when there is no quantum error-correction around.) 5) The idea of the universe as a quantum computer which runs quantum error-correction is used in the paper "Black holes as mirrors: quantum information in random subsystems" by Hayden and Preskill. From what I understand, in this paper certain quantum states in a black hole are required to behave like generic unitary states, and since such states are infeasible, states with similar properties arising from quantum error-correction are proposed instead. It will be interesting to examine whether Hayden-Preskill's idea can work with quantum error-correction replaced by a two-non-interacting-quantum-computers solution.
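[Point 4 admits a small numerical illustration; the following sketch is an editorial addition, not part of the thread. On the same 4-dimensional Hilbert space, a second tensor product structure is defined by a random unitary change of basis, and a state that is a product state in the first structure is generically entangled in the second.]

```python
import numpy as np

rng = np.random.default_rng(0)

def entanglement_entropy(psi):
    """Entropy (in bits) of the reduced state of the first qubit,
    for a two-qubit pure state psi in C^4."""
    M = psi.reshape(2, 2)
    rho_A = np.einsum("ij,kj->ik", M, M.conj())   # partial trace over qubit 2
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

# A product state |0> (x) |+> in the "first" tensor product structure.
plus = np.array([1, 1]) / np.sqrt(2)
psi = np.kron(np.array([1, 0]), plus).astype(complex)

# A second tensor product structure on the same C^4, given by a random
# unitary change of basis V (roughly Haar, via QR of a Gaussian matrix).
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V, _ = np.linalg.qr(Z)

print(entanglement_entropy(psi))      # 0: product state in structure 1
print(entanglement_entropy(V @ psi))  # generically > 0 in structure 2
```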
27. ramirez permalink June 12, 2012 10:51 am Usually the Chinese write "peoples" as a plural when "people" is already plural; the same mistake was made in Mao Tse-tung's biography. We are accustomed to taking somebody else's mistakes as truthful. "Weylan" means labor-camp woman: Mao's mother, and the Bolshevik holy icon that represents the mother nation of the truthful patriots. Karl Marx was a German Jew who wrote "Das Kapital"; Einstein was a German Jew also. Both theories shocked the world with conjectures on human quality recognition and the equal distribution of income, while the occidental countries constructed their kingdoms based on slavery and human degradation, arguing that they were doing good for humanity. Why can't two supercomputers be enabled to work together? Their programmers keep the security codes in the so-called Star Wars, where the code couldn't be cracked, hijacked, or erased, in order to protect their commanding source and get rid of them in case of a confrontation. What is the problem with the quintic equation being solved by radicals? That we do not have an exact number for the square root of 2, or of 1. All the operators are built on hertzian operations: how fast an electric current travels through a conductor of logic gates, flipping them to zeros or ones. The quantum bit here has been recorded at a different wavelength, not in a different code source; this wavelength is exclusive to the Pentagon or the Kremlin for operating their military satellites. It is something like a chess board: it has two parts interacting with each other to find the weighting of the code encrypted in each memory stack. However, the quantum bit presents a conflict where the antimatter is present as an antiquark at a wavelength. Einstein's equation E=mc² caused international mockery and hysteria among the mathematicians and physicists. Why? Gödel's inability to solve the quintic equation was resolved through a logical aberration: C, the constant of the speed of light, is the maximum speed of light in an empty space, so how are you going to accelerate faster than the speed of light to get C squared? Somehow the universe is noisy, because they found the sounds of exploding stars, and these shock waves travel faster than the speed of light; that is what is called Quanta, something like ether, or antimatter (the Micamocka chocolate chip). It is the same antimatter quantum bit that the Tevatron is looking for in the Large Hadron Collider, and it is obtained in a Higgs equation through a massive collision of particles where the expansion wave has to be similar to a supernova star; however, they do not have the expected results. This event should create a time-space distortion where two or more atoms try to occupy the same dimensional time-space; this is called atomic fission and is found in radioactive materials that eat the surrounding material (Chernobyl). There is an angle deviation in the equations (Bishop) that acts as a counterweight to the atom's spin when it reaches C squared; that factor is what is called gravitational spin. The Negro playing chess with the rabbi is symbolic of the Ark of the Alliance, but that does not make them geniuses, as you say. Bobby Fischer, Karpov, Kasparov, and many others work on an equilibrium equation where any move of a knight changes the whole equation algorithmically. Man's visual field is 20-20 while the horse's is greater, 30-30, so this difference gives you a linear regression. Check: once you are the king, your place is the "Ara," Aramaic.
That becomes a black hole in the gravitational field when you are out of bounds: a quantum bit (antiquark) that is present before the integration of the mass hertzian wave. Einstein used 8 times the radius of the speed of a light emitter to create the gravity field where you can encrypt any antimatter code. The Megabucks trick, the National Lotto, and other crap games. The tower J is the Joker's Club; whose club is the Tower B? 28. ramirez permalink June 18, 2012 7:15 pm Heisenberg's uncertainty is about how sure you are of hitting the nucleus of an atom in a chain reaction if you cannot come back to the same place you left when you went up a quintic polynomial, when it involves exponentials of C squared. The radicals are affected inversely. 29. June 11, 2013 12:59 am I gave a talk at the HUJI CS theory seminar on matters related to my conjectures and the debate, and there were several interesting comments by Dorit Aharonov, Michael Ben-Or, Nadav Katz, and Steve Wiesner. Dorit suggested that experimental cat states with a huge number of qubits are counterexamples to the conjecture on bounded-depth computation. This is a good point!! I should certainly look into it. 30. January 23, 2014 9:13 am One thing I never explained is why I considered Aram and Steve's example a counterexample to my Conjecture C. The setting of Conjecture C was to find limitations on states achieved by noisy quantum computers with realistic noise models. The prior assumption you need to make is that the noise on gates is of arbitrary nature. (And, in fact, for my full set of conjectures you need to assume that information leaks on gated qubits/qudits are positively correlated.) Aram and Steve had two examples. The first is based on qudits. This is an interesting example, and certainly my Conjecture C should extend to qudits. But in Aram and Steve's example the noise on gates is not of a general nature but rather of a very structured nature. So it does not apply to the right extension of Conjecture C to qudits, although it does impose an interesting condition on "censorship conjectures." The second, qubit example is more convincing. (Ironically, it is quite similar to an example I proposed myself in 2007.) A&F proposed a pure state which seems easy to approximate but for which my entropic parameter is exponential. What happens for mixed states which represent realistic approximations of this state? If the parameter is exponential for them, this is a counterexample to my conjecture. If it is not, it shows that the entropic parameter I defined is seriously flawed. (It will be interesting to know which possibility is correct, but in both cases I regarded my original entropy-based parameter as inappropriate.)
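[Since the W state recurs throughout this thread, a concrete sketch may help; this is an editorial addition, not part of the discussion. It writes W_n explicitly as a sum of n product (computational-basis) terms, so its tensor rank is at most n, polynomial in n as noted earlier in the thread, while a Schmidt-rank check confirms it is nevertheless entangled across every cut.]

```python
import numpy as np

def w_state(n):
    """W_n = (|10...0> + |01...0> + ... + |0...01>) / sqrt(n),
    written as a sum of n product terms, so tensor rank <= n."""
    psi = np.zeros(2**n)
    for i in range(n):
        psi[1 << (n - 1 - i)] = 1 / np.sqrt(n)  # basis state with a 1 at qubit i
    return psi

n = 3
psi = w_state(n)

# Schmidt rank across the cut (first qubit | rest): 2 for any W_n, so the
# state is entangled across every cut, yet its tensor rank grows only
# linearly with n.
M = psi.reshape(2, -1)
print(np.linalg.matrix_rank(M))  # -> 2
```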
Four facts about quantum physics that professors don't teach We often talk about quantum physics, and many of us, if not all, have at one time strained our brains trying to figure out what is going on. But what could be even more bizarre than the strange infinite integrals and the complex mathematics the field has developed over decades? These facts about quantum physics. 1. The discovery of the Schrödinger equation The Schrödinger differential equation is well known to young chemists and physicists around the world. Briefly, the equation describes the motion of electrons around the nucleus in a revolutionary way, and it radically changed how the scientific community viewed the model of the atom; Schrödinger received the Nobel Prize for the discovery. But the story behind it is a little strange. At Christmas 1925, Schrödinger went on a short vacation to a place called Arosa. His relationship with his wife was at a record low, so he decided to invite an old friend from Vienna to keep him company. He also took with him some of de Broglie's papers. When he returned from vacation on January 8, 1926, he announced the discovery of wave mechanics, the theory that describes the electron as a wave. When asked, "How was your vacation, Professor?" he replied: "I was distracted by some calculations." 2. It turns out that mass is not what you think We all thought that mass is the amount of matter in an object. Well, this is only partly true. The Higgs mechanism has other ideas on the matter; it in fact upends our intuition. The Higgs mechanism interprets the mass of a particle according to how strongly the particle interacts with a specific field (the Higgs field, of course). Technically, nothing in this world has mass unless it interacts with this strange field. It is for this reason (and not because of its long absence from the detectors) that scientists call the Higgs boson the "God particle." To explain the theory, David Miller offered a crowd of politicians a very simple analogy: "Imagine a cocktail party of political party workers, evenly distributed around the room and talking to their closest colleagues. A former prime minister enters and moves around the room. All the workers near her are attracted to her and cluster around her. Since there is always a crowd of people around her, she acquires greater mass than usual; that is, she has more momentum at the same speed. As she moves, she is harder to stop, and once she has stopped, she is harder to set moving again, because the clustering process has to restart. In three dimensions, with the complications of relativity, this is the Higgs mechanism." 3. Quantum mechanics allows you to be in two places at once With one caveat: if you are a quantum particle. The Heisenberg uncertainty principle and Young's double-slit experiment really do invite us to imagine a new world; these laws have shown that rather than sitting in one definite place, a particle has only a certain probability of being found at a given position (x, y, z). Unfortunately, the uncertainty in these measurements has little effect on everyday objects, but when it comes to an electron, for example, scientists can identify regions in which the electron may be detected, yet cannot specify its exact position. This principle is also known as quantum superposition.
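[To make the superposition claim above concrete, here is a minimal numerical sketch; it is an illustration added to this text, not part of the original article. It models the two slits as two point sources of equal amplitude and compares the quantum-style intensity |ψ1 + ψ2|², where amplitudes add before squaring, with the classical sum |ψ1|² + |ψ2|². The wavelength, slit separation, and screen distance are arbitrary illustrative values.]

```python
import numpy as np

# Illustrative parameters (SI-ish units): wavelength, slit separation,
# distance to the screen, and positions x along the screen.
wavelength, d, L = 0.5e-6, 10e-6, 1.0
k = 2 * np.pi / wavelength
x = np.linspace(-0.2, 0.2, 2001)

# Path lengths from each slit (at y = +d/2 and y = -d/2) to each screen point.
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)

# Complex amplitudes for the two paths.
psi1 = np.exp(1j * k * r1) / r1
psi2 = np.exp(1j * k * r2) / r2

interference = np.abs(psi1 + psi2) ** 2            # amplitudes add: fringes
classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # probabilities add: none

# The fringe contrast (max/min) is enormous for the quantum rule and
# essentially 1 for the classical rule.
print("interference contrast:", interference.max() / interference.min())
print("classical contrast:   ", classical.max() / classical.min())
```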
Schrodinger and his famous cat Erwin Schrödinger be known among quantum chemists his revolutionary equation, but his name among mortals often brings to mind a cat. In response to the problem of the so-called Copenhagen interpretation of quantum mechanics, Schrodinger came to the country, but a very interesting thought experiment. He presented a box that contains a live cat, a radioactive material, a hammer and a corrosive acid. If the radioactive material decays, it would lead to the fact that the hammer will fall on the container with the acid and break it, which in turn will lead to the death of the cat. But Schrodinger said that the chances of decay of radioactive material after exactly one hour is 50%. It is logical to assume that in an hour the cat is either alive or dead, and we can not determine this until you open the box. Himself Schroedinger concluded that, according to quantum mechanics, the cat is both alive and dead until the moment when we open the box and does not know his current condition. Quantum mechanics was the most amazing area of ​​science that literally turns the way we look at everyday things. Although the four examples above seem interesting and clear, one little quote by Richard Feynman sums up everything you need to know about the science: “I think I can safely say that nobody understands quantum mechanics.” Vicky Singh Rao The author Vicky Singh Rao Leave a Response
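A note on the 50% figure in fact 4: it is just the definition of a half-life. Assuming, as the thought experiment does, that the sample's half-life is one hour, the standard exponential decay law gives

```latex
P(\text{decay by time } t) = 1 - e^{-\lambda t},
\qquad \lambda = \frac{\ln 2}{t_{1/2}},
\qquad P(t_{1/2}) = 1 - e^{-\ln 2} = \tfrac{1}{2}.
```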
Visualizing Atomic Orbitals

Atomic orbitals are the wavefunctions that solve the Schrödinger equation for the hydrogen atom. A subset of the atomic orbitals (the s, p, d, and f types) is plotted in three dimensions to exhibit their characteristic shapes. The orbitals are drawn by showing their boundary surfaces. In the second view, + and − signs are attached to the relevant lobes of the orbitals, which are colorized accordingly. This Demonstration shows the basic characteristics for a chosen set of 16 atomic orbitals: the type, the absolute value of the quantum number m, the number of lobes/nodes, the Cartesian polynomial form of the wavefunction, and two 3D views of the probability density (boundary surface, with or without phases). Axes and labels can be displayed as an option via a checkbox.

Contributed by: Guenther Gsaller (June 2007)
Open content licensed under CC BY-NC-SA

In chemistry, orbitals can be classified according to their orientation in a rectangular coordinate system. The set of shapes in the snapshots is given for several values of n and for combinations of l and m. The three p-orbitals for a given value of n are described by the values m = −1, 0, +1; m = 0 gives the p_z orbital. The angular functions for m ≠ 0 are complex and depend on the angles θ, φ, or both. Pairwise linear combinations of complex spherical harmonics yield real functions, which can be plotted as boundary surfaces. For p_x and p_y, for example, we have p_x ∝ (Y_1^−1 − Y_1^+1) and p_y ∝ i(Y_1^−1 + Y_1^+1). The function pos inside OrbitalModel is shown at the link "Problem with SphericalPlot3D plotting"; it is used to attach signs to the positive and negative parts of the radial wavefunction, so that the two parts can be colored differently. Alternative representations for the seven f orbitals can be written; in this Demonstration, the most commonly used convention was chosen [3].

[1] P. Atkins and R. Friedman, Molecular Quantum Mechanics, Oxford: Oxford University Press, 2011.
[2] R. King, "Atomic Orbitals, Symmetry, and Coordination Polyhedra," Coordination Chemistry Reviews, 197, 2000, pp. 141–168.
[3] M. Winter, "The Orbitron: A Gallery of Atomic Orbitals and Molecular Orbitals on the WWW." (Jan 2013)
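As a rough illustration of the "pairwise linear combinations" step above (this is my own sketch in Python with SciPy, not the Demonstration's Mathematica code), one can check numerically that (Y_1^−1 − Y_1^+1)/√2 is real and equals the familiar p_x angular function:

```python
import numpy as np
from scipy.special import sph_harm

# NB SciPy's convention: sph_harm(m, l, theta, phi) with theta = azimuthal
# angle in [0, 2*pi] and phi = polar angle in [0, pi].
theta, phi = 0.7, 1.1                      # arbitrary test angles
y_m1 = sph_harm(-1, 1, theta, phi)         # Y_1^{-1}
y_p1 = sph_harm(+1, 1, theta, phi)         # Y_1^{+1}

p_x = (y_m1 - y_p1) / np.sqrt(2)           # real linear combination
analytic = np.sqrt(3 / (4 * np.pi)) * np.sin(phi) * np.cos(theta)

print(np.isclose(p_x.imag, 0.0))           # True: the combination is real
print(np.isclose(p_x.real, analytic))      # True: it matches the p_x angular form
```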
Quantum woo
From RationalWiki

"If a sentence has the word 'quantum' in it, and if it is coming out of a non-physicist's mouth, you can almost be certain that there's a huge quantum of BS being dumped on your head." —Physicist Devashish Singh, quoting a colleague[1]

"I think there are two villains here: (1) Physicists, who are (rightly) desperate to explain to the world the extraordinary, fascinating, and profound implications of quantum mechanics. But they are afraid of intimidating an audience that gags at the sight of an equation; they want to convey the excitement without the substance. So they resort to forced similes and grossly misleading metaphors (quantum tunneling means you can walk through walls—somehow it never works when I try it). (2) Non-physicists who are intrigued by words like 'uncertainty' and 'indeterminacy,' but are too lazy to do the serious work it takes to understand them." —David J. Griffiths[2]

Quantum woo is the justification of irrational beliefs by an obfuscatory reference to quantum physics. Buzzwords like "energy field", "probability wave", or "wave-particle duality" are used to magically turn thoughts into something tangible in order to directly affect the universe. This results in such foolishness as the Law of Attraction or quantum healing. Some have turned quantum woo into a career, such as Deepak Chopra, who often presents ill-defined concepts of quantum physics as proof for God and other magical thinking. When an idea seems too crazy to believe, the proponent often makes an appeal to quantum physics as the explanation. This is a New Age version of God of the gaps. Quantum woo is an attempt to piggy-back on the success and legitimacy of science by claiming quack ideas are rooted in accepted concepts in physics, combined with utter misunderstanding of these concepts and a sense of wonder at the amazing magic these misunderstandings would imply if true.

A quick way to tell whether a claim about quantum physics has scientific validity is to ask for the mathematics. If there isn't any, it's rubbish. Brian Cox proposed that one should challenge Deepak Chopra to first solve the Schrödinger equation for a spherically symmetric potential, and then talk about quantum healing.

The New Age fascination with quantum mechanics seems to date to the mid-to-late 1970s and the books The Tao of Physics by Fritjof Capra and The Dancing Wu Li[3] Masters by Gary Zukav.[4][5] Both books were received skeptically by most in the physics community,[6] with the Zukav book somewhat more heavily scorned. The author of the first, Fritjof Capra, has worked professionally as a physicist, but Zukav has virtually no formal training in the field. Capra's book had the occasional friendly physicist reviewer, such as Victor Mansfield, who like Capra is a proponent of Buddhist philosophy. Many acknowledged that Capra had described quantum physics fairly, though his correlations between it and Buddhist mysticism were superficial and silly, and Peter Woit noted that the book used quite a bit of out-of-date physics.
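For reference (this is standard textbook material, not anything specific to this article), the equation in Cox's challenge is the time-independent Schrödinger equation for a central potential V(r), which after separating out the spherical harmonics reduces to the radial equation

```latex
-\frac{\hbar^2}{2m}\left[\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right)
-\frac{\ell(\ell+1)}{r^2}\,R\right]+V(r)\,R=E\,R .
```

For the hydrogen atom, V(r) = −e²/(4πε₀r), and solving this yields the bound-state energies Eₙ = −13.6 eV/n², exactly the kind of concrete, checkable result that quantum woo never produces.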
Physicist John Gribbin described The Tao of Physics as the only purveyor of quantum-based mysticism with any genuine grasp of quantum physics at all,[7] although the book's physics has been severely criticized by Victor Stenger.[8] In a joint review of both the Capra and Zukav books, physicist Jeremy Bernstein described the two collectively as not serious descriptions of quantum physics. It should also be noted that Eastern religions do not have a single monolithic underlying philosophy; each one is divided into multiple schools of thought in ways not acknowledged by the sweeping generalizations about "Eastern religion" in Capra's book. It may be true that both quantum physics and Eastern religion view the universe as "a dynamic interconnected unity", but that does not mean that the details are the same. Both books continue to be embraced by those who need an all-purpose explanation for their woo.

Arguably, some purveyors of quantum mysticism, such as Deepak Chopra and the writers of the film What the Bleep Do We Know?, are entirely ignorant of quantum physics, while others may understand quantum physics but draw confused philosophical conclusions from it. Although Oxford mathematician Roger Penrose shared the Wolf Prize for Physics with Stephen Hawking in 1988, Hawking vigorously opposed Penrose's attempts to develop an explanation for consciousness from quantum physics (as have physicist and atheist Victor Stenger and philosopher Daniel Dennett). However, Penrose does not engage in the massive distortions of modern physics that are found in Chopra and others.

Quantum woo is invoked by alties and woo-pushers in the manner that Nikola Tesla is invoked by crackpot inventors. Popular culture movies such as The Secret and What the Bleep Do We Know? have also appealed to such concepts. Some of the less credible Neopagan authors, including Silver Ravenwolf, have begun doing the same thing.

Material that skirts the edge

Strong quantum woo might be defined as literature that maintains that quantum physics has just proved what ancient mystics knew all along. There is some literature exploring the intersection of quantum physics and religion which falls short of making such grandiose claims. Quantum physicist John Polkinghorne later became an Anglican priest and the author of books trying to synthesize science and the supernatural claims of Christianity. However, Polkinghorne mainly employs the standard apologetic arguments from the anthropic principle and Isaac Newton's claim that the laws of physics require a lawgiver and a creation requires a creator. Polkinghorne makes no strong claims about any metaphysical implications of quantum physics, although that was his field as a scientist.

The Buddhist-themed book The Quantum and the Lotus is by two authors, an astrophysicist (Trinh Xuan Thuan) and a Buddhist monk (Matthieu Ricard). It suggests that the discoveries of quantum physics and various Buddhist perspectives might be mutually supportive of each other, but this work makes far weaker claims than the Capra and Zukav books, and on several points the two authors visibly agree to disagree. The Vietnamese astrophysicist Trinh Xuan Thuan often adopts a more characteristically Western scientific outlook, and the French Buddhist monk, Matthieu Ricard, often adheres more strictly to the outlook of classical Buddhist philosophy.
Many respectable quantum physicists, including David Bohm, Erwin Schrödinger and Wolfgang Pauli, have noted the similarities between mystical and quantum worldviews. Erwin Schrödinger wrote in What Is Life? that the world envisioned by quantum mechanics is monistic, as taught in mystico-religious traditions: "The multiplicity is only apparent. This is the doctrine of the Upanishads. And not of the Upanishads only. The mystical experience of the union with God regularly leads to this view."

The reason for quantum woo is the almost mystical status of quantum mechanics in the collective imagination: almost nobody knows what it actually is, but it's definitely extremely hard science about very awesome stuff. Even a basic understanding of quantum mechanics requires a working knowledge of differential, integral, multivariable, complex, vector and tensor calculus, differential equations, linear and abstract algebra, classical Newtonian mechanics, and electromagnetism. Such topics are waaaaaaaaaaaay out of the league of anyone who hasn't spent at least three years studying them, and this, combined with the efforts of pop-science authors to make science accessible to the masses, inevitably leads to quantum mechanics being widely summarized as all the weird, wonderful properties of matter at the tiny nanometric scale—and all it takes to make something appear to be based on Hard Science™ is spouting a little bit of vague technobabble about quantum stuff. The logical process runs something like this:

1. I want magic to exist.
2. I don't understand quantum.
3. Therefore, quantum could mean magic exists.

Concepts such as "non-locality" or "quantum probability waves" or "uncertainty principle" have become social memes of a kind where people inherently recognize that something "strange" is going on. Practitioners of fraudulent and silly ideas can tap into this feeling of mystery to push their sham concepts: e.g., "over-unity" quantum-flux devices,[9] Quantum Stirwands™,[10] and "quantum therapy".[11]

One bad habit often exhibited by pushers of quantum woo is throwing out the theories of Isaac Newton because his work has supposedly been rendered obsolete by quantum theory. In actuality, the Newtonian equations of motion work quite well when it comes to predicting the motion of a football, asteroid, or comet (in fact, the computers used in the Apollo mission were programmed with them).

Quantum woo and Christianity

Quantum Jesus...

A few people on the fringe claim that Jesus exhibits properties similar to those of quantum particles.

• The idea of something being a particle and a wave simultaneously is weird and apparently contradictory.
• The idea of Jesus being divine and human simultaneously is weird and apparently contradictory.
• Therefore, perhaps the two are connected.

Ragnarok, a blogger who professes to be Catholic,[12] has co-opted ideas of wave/particle duality as an analogy to explain the dual nature of Jesus as both man and God: "If you can consider light being both a particle and a wave, then it also becomes reasonable to see how Jesus can be both a human and a God. Think about it. Jesus exhibits 'human' properties like having a physical body, eating, drinking, and having emotions. On the other hand, He also has 'God' properties like the power to resurrect people, controlling the weather, knowing future events, and healing. Like light, Jesus exhibits properties from His dual natures. You could say He is the true 'God Particle.'"[13]

Anthony J.
Fejfar takes this to an even more unusual level, proclaiming Jesus to be "The Quantum Field" (all capitals).[14] In his short tract he explains how Jesus "Quantums" himself in and out of the tomb and Mary's womb; apparently he can dematerialize through "Quantum." For the record, no quantum entity is "fully a wave and fully a particle"; rather, quantum entities are an entirely different type of thing which happens to exhibit some properties of each, somewhat like how liquids exhibit some properties of both solids and gases (although quantum particles are not "intermediate" between waves and particles). If Jesus is to be understood in this light, the result is a heresy akin to "modalism", whereby the Holy Trinity is understood as one person with three different "aspects" or "masks", and not as one Being and three persons simultaneously.

...and quantum creationism

Desmond Paul Allen is a crank with a different kind of quantum woo, mixing it with creationism into some sort of incoherent word salad.

Real science

If you want to read a good book on quantum physics, scienceblogger Chad Orzel recently published a very accessible book called How to Teach Physics to Your Dog. Way better than anything Deepak Chopra might write. For a popular-science overview, check this New Scientist article.

See also

References

2. Greg Bernhardt, "Interview with a Physicist: David J. Griffiths," Physics Forums Insights, September 17, 2016.
3. Some critics might be tempted to designate it "woo lee".
4. The Tao of Physics (Shambhala Publications, 1975, ISBN 1570625190).
5. The Dancing Wu Li Masters (William Morrow & Co., 1979, ISBN 0553249142).
6. Reviewer Jeremy Bernstein of the New Yorker Magazine, quoted by Martin Gardner in a 1979 review for Newsday, described Zukav's and Capra's physics by saying "A physicist reading these books might feel like someone on a familiar street who finds that all the old houses have suddenly turned mauve."
7. In the preface to his own work In Search of Schrödinger's Cat.
8. For example, on the April 2010 episode of the podcast For Good Reason.
9. vlad, "A New Quantum Flux Level Over-Unity Device is Discovered," 2003 February 01.
10. "Quantum Stirwands™," Quantum Age.
11. "Quantum Therapy," 2009 May.
12. Ragnarok's profile on Blogger.
13. Ragnarok, "Quantum Jesus," The Dark Side of the Universe, 2009 July 24.
14. Anthony J. Fejfar, "The Quantum Jesus: A Tract Book Essay," 2007.
Stoyan Kurtev

Wave-like patterns in behavioural data: evidence for quantum processes in the brain?

Numerosity estimation, as a task for studying the behaviour of the mental state, is analogous to probing the properties of a particle state by measuring position and velocity with particle detectors. In that analogy, the difference between the guess and the actual number of objects (e.g., dots) corresponds to the position variable, and the response time corresponds to the velocity/energy variable of the mental state. The findings from three behavioural psychology experiments indicate that when people estimate the number of dots on a screen, the response times vary more when they make more effort than when the task is easier, and that the variability of the response times depends on the amount of effort. More specifically, the variability exhibits an oscillatory pattern along the effort dimension. The first finding is analogous to the enlarged momentum variability of a state squeezed in position. The second finding is analogous to the wave patterns of the probability density of position and energy/momentum arising from the Schrödinger equation. The findings lend support to the idea of an analogy between the behaviour of mental states and the behaviour of particle states, and indirectly also to the hypothesis that the physical substrate of the conscious mental state is an ensemble of quantum-entangled particles. A speculative explanation is offered as to how this may be realised in the brain.
Sunday, July 12, 2009

Aether-based explanation of dark matter

A month ago I listed four explanations of dark matter, which from the AWT perspective all hold in parallel:

1. a consequence of the finite speed of light spreading through expanding space-time
2. a surface-tension effect of the bell-curve-shaped gravity field
3. an application of mass-energy equivalence to the Einstein field equations
4. a result of the variable surface/volume ratio in energy spreading by the principle of least action

But we can use an even more illustrative explanation, linked to the dispersion of energy by the background field of CMB photons formed by gravitational waves (GWs), which manifests as a weak deceleration equal to the product of the Hubble constant and the speed of light. This dispersion is a direct manifestation of hidden dimensions at both large and small scales, because it manifests as a shielding effect of these photons at the distance scale of the Casimir force. We can say that the Casimir force is a shielding effect of GWs, whereas the Pioneer anomaly is a subtle deceleration caused by dispersion by GWs. Both of these forces result in violations of Newton's law at small scales, which manifest as an anomalous deceleration at large scales and as such violate the equivalence principle of general relativity - it's as easy as that.

We can even find a direct analogy of this deceleration in our "pocket model" of the observable Universe at the water surface. From the local perspective of every observer whose size is evolutionarily adjusted to the wavelength of capillary waves (the human distance scale), such a surface is covered mostly by transverse waves, in which energy spreads at the maximal speed from his intrinsic perspective, so he can interact with the largest space-time possible (the speed of transverse waves is minimal from the extrinsic perspective, instead). But the particle character of the water environment manifests through the dispersion of surface waves by tiny density fluctuations of the underwater, which results in a gradual change of the transverse character of capillary waves into a longitudinal one (i.e., into gravity waves). This dispersion decreases the speed of the waves from the extrinsic perspective, which manifests as an omnidirectional expansion of the Universe from the intrinsic perspective, or as a subtle deceleration that effectively freezes the spreading of surface waves; this can be interpreted as the spreading of these waves through an environment of gradually increasing density. We can observe this effect easily in splash ripples formed by capillary waves. In the example below, such waves are formed by the bursting of bubbles at the water surface, which can be interpreted as the radiative decay of an unstable particle in vacuum into gamma photons.

By this interpretation, dark-matter effects like the Pioneer anomaly are closely related to the expansion of the Universe: for example, the anomalous deceleration of the Pioneer spacecraft (0.87 ± 0.13 nm/s²) is equal to the product of the Hubble constant and the speed of light (a = Hc), which agrees (to within roughly 10%) with the observed value. From this perspective, every object is surrounded by a virtual massive field which originates from the massive field of virtual photons, i.e. the field of density fluctuations manifesting in GWs formed by gravitons expanded by inflation and forming the vacuum foam - and in this context it is quite a natural and easily predictable effect following directly from AWT.
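The a = Hc claim is easy to check on the back of an envelope. The sketch below is mine, in Python; the post doesn't say which value of the Hubble constant it assumes, so H₀ ≈ 70 km/s/Mpc is taken here:

```python
# Order-of-magnitude check of the claim a = H*c vs. the Pioneer anomaly.
MPC_M = 3.0857e22        # metres per megaparsec
C = 2.998e8              # speed of light, m/s
H0 = 70e3 / MPC_M        # assumed Hubble constant, converted to 1/s

a_pred = H0 * C          # predicted deceleration, m/s^2
a_pioneer = 0.874e-9     # reported Pioneer anomaly, m/s^2

print(f"H*c     = {a_pred * 1e9:.2f} nm/s^2")      # ~0.68 nm/s^2
print(f"Pioneer = {a_pioneer * 1e9:.2f} nm/s^2")   # 0.87 nm/s^2
print(f"ratio   = {a_pioneer / a_pred:.2f}")
```

With these inputs, Hc comes out around 0.68 nm/s², in the same ballpark as the quoted 0.87 ± 0.13 nm/s², though the agreement is closer to 20-25% than to the 10% stated above.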
It is just the immense density of the vacuum and the common disbelief in the Aether concept that have kept the dispersion of the background field from being linked to the dark-matter observations and the Pioneer anomaly for so many years. There's still plenty of room "at the bottom" of basic human understanding. Note that in this context the further search for GWs is meaningless, because we have already observed them as the background noise of GW detectors, and their scope is limited by the scope of the Casimir force, in the same way as the scope of extra dimensions and Lorentz-symmetry violation at low scales.

As J.C. Cranwell (archive) pointed out, Prof. Stephen Hawking has blundered with his own image... This picture comes from his book "A Briefer History of Time", page 29, and it illustrates an energy wave spreading in a particle environment. It's easy to see the waves getting further apart from each other as time increases, while Hawking still claims that Lorentz invariance is "difficult to reconcile" with Newton's theory. Of course it is, because it leads not only to Lorentz invariance, but to the dark-matter and expanding-universe observations. This example just illustrates how everyone sees what he wants to see: Hawking the physmatic sees waves of constant wavelength in a picture which illustrates exactly the opposite.

Albert Einstein: "You do not really understand something unless you can explain it to your grandmother."

Zephir said... What I found on the web just now... Gravitons as a spacetime fabric, string theory is just a failed aether theory, Lubos Motl and Peter Woit...

El Cid said... Better yet, prove that I'm wrong. Well, I'm going to use logic: if AWT is correct, then you should be able to solve a very simple physics problem. I challenge you to solve the following problem using only the AWT postulates, as you call them. The solution is a couple of numbers. Neither stories of strange things nor pictures are permitted. You have to show how you obtain the two numbers using the postulates of AWT, i.e., you must use deductive reasoning from the AWT postulates. If you can solve this problem, you win; otherwise you're a quack. Well, the problem, one, two, three, go: A stone thrown from the floor is given an initial velocity of 20.0 m/s straight upward. Determine the time at which the stone reaches its maximum height and the time at which the stone returns to the point from which it was thrown.

Zephir said... /*..if AWT is correct, then you could solve a very simple physics problem...*/ For example, quantum mechanics doesn't recognize the gravitational constant, so your trivial task would be unsolvable using quantum mechanics. Does that mean quantum mechanics is a crackpot theory if it cannot handle such a trivial assignment? If not, why should AWT be?

Zephir said... Despite that, AWT is still the only concept which can explain in an independent way why the gravity force is inversely proportional to the square of the distance (compare the Duillier - Le Sage theory of gravity).

El Cid said... Another chance: you should forget QM and solve the problem using deductive reasoning from the AWT principles. I only want two numbers and their units of measurement.

Zephir said... Why should I forget QM? First prove that your assignment is solvable in this mainstream theory. If it's not, you shouldn't blame AWT for incompetence.

El Cid said... I'm going to solve the proposed problem using QM, with some valid approximations.
We use the following notation:
V(x) is the potential energy.
|f] is the wave packet of the particle, which defines the state of the particle.
X is the position operator.
P is the momentum operator.
V = V(X) is the potential-energy operator.
[X] = [f|X|f] is the expectation value of the position operator X in the state |f]; [X] is the center of the wave packet at the instant t.
[P] = [f|P|f] is the expectation value of the momentum operator P in the state |f].
a=a means "approximately equal".
Int(-Inf,Inf) is the improper integral over the real numbers.
v0 = 20 m/s is the initial velocity.
x0 = 0 m is the initial position.
g = 9.8 m/s^2 is the acceleration due to gravity at sea level (the only parameter that needs to be introduced).

El Cid said... From Ehrenfest's theorem, we get:
d[X]/dt = (1/m) [P]
d[P]/dt = - [grad V] = - [dV/dx]
[P] = m d[X]/dt ; m d^2[X]/dt^2 = - [dV/dx]
I'm going to show that [dV/dx] a=a (dV/dx)(x=[X]); indeed:
[dV/dx] = Int(-Inf,Inf) f*(x)(dV/dx)f(x)dx a=a (dV/dx)(x=[X]) Int(-Inf,Inf) f*(x)f(x)dx
This approximation is valid because the wave packet f(x) is much smaller than the distances over which (dV/dx) varies appreciably: the wave packet f(x) doesn't vanish in an interval centered on [X], and (dV/dx) doesn't vary appreciably in this interval. So
m d^2[X]/dt^2 = - (dV/dx)(x=[X]),
namely Newton's second law. The potential energy of the stone is V = mg[X], where [X] is the height of the stone.
(dV/dx)(x=[X]) = mg; d^2[X]/dt^2 = -g; d[X]/dt = -gt + v0; [X] = -(1/2) g t^2 + v0 t + x0
d[X]/dt = -9.8t + 20
[X] = -(1/2)(9.8) t^2 + 20t
And now the two numbers:
1) The stone reaches its maximum height when d[X]/dt = 0: 0 = -9.8t + 20; t1 = 2.04 s.
2) The stone returns to the point from which it was thrown when [X] = 0: 0 = -(1/2)(9.8)t^2 + 20t; 0 = t(20 - (1/2)(9.8)t); t2 = 4.08 s.
Zephir, you're a true quack.

Zephir said... /*...the potential energy for the stone is V = mg[X] ...*/ OK, and where do you get this equation from? Isn't it derived from Newton's theory? If so, why not use Newton's theory from the very beginning? Ehrenfest's theorem itself is derived under the assumption that the Hamiltonian has the same form as in classical physics, H = p^2/2m = (1/2m) Sum(i=1..3) p_i^2... In this way, your whole derivation is just a sort of circular reasoning: you're deriving an effect of classical physics by using theorems which were derived using the classical-physics approximation (in fact it's just the reversed case of the classical derivation of Ehrenfest's theorem as given in various textbooks).

Zephir said... /*...the wave packet f(x) doesn't vanish in an interval centered on [X] ...*/ This is just an assumption of yours, borrowed again from classical physics - but not from QM. By the Schrödinger equation, such an object would vanish at an initial speed corresponding to the speed of light. By quantum mechanics, such an object wouldn't reach its maximal height - instead it would create a stable Rydberg orbital at height X/2, surrounding the whole Earth.

Zephir said... When people are dating, they refuse to know what they are getting into... This is what love is called...

El Cid said... In QM, there is an observable (Hermitian operator) called the Hamiltonian H. In one dimension, the Hamiltonian is defined as H = P^2/2m + V(X), where P and X are the momentum operator and the position operator, respectively. We can define V(X) = mgX if we want, no matter whether it's functionally equal to the gravitational potential energy in classical physics.
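El Cid's two numbers are easy to verify: since Ehrenfest's theorem reduces [X] to classical kinematics here, a few lines of Python (mine, not the commenter's) reproduce them:

```python
# Check of the two numbers above, assuming v0 = 20 m/s, g = 9.8 m/s^2, x0 = 0.
g, v0 = 9.8, 20.0
t1 = v0 / g        # d<X>/dt = 0 at the top of the arc
t2 = 2 * v0 / g    # <X> returns to zero height
print(f"t1 = {t1:.2f} s")   # 2.04 s
print(f"t2 = {t2:.2f} s")   # 4.08 s
```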
But in QM, V(X) is a Hermitian operator, while in CM it is a function. The Schrödinger equation is H|vi] = Ei|vi], where the Ei are the eigenvalues and the |vi] are the eigenfunctions of the operator H. To obtain Ei and |vi], we must solve a differential equation of the type y'' + Axy = 0. The wave packet can be expressed as |f] = Sum(i, ci|vi]). The wave packet represents the state of the particle, in this case the stone. I've considered the stone to be pointlike, i.e., an elementary particle. In QM, [X] is not the position of the particle (stone), but we can consider a ball centred at [X] where it's very likely to find the particle. I'd like you to realise that [X] moves according to Newton's second law. It can be shown that CM is a limit of QM. By the way, in this particular case we don't need the approximation "...the wave packet f(x) is much smaller than the distances over which (dV/dx) varies appreciably...", because the equality [dV/dx] = mg is exact. Sorry if I've insulted you, but I was very angry because you criticised me. I think you should agree that I've solved the problem using QM.

Zephir said... /*..we can define V(X) = mg X if we want..*/ Sorry - I know it's quite natural for you to think in such a straightforward way and to mix various theories and theorems into a single one - but this equality has nothing to do with quantum mechanics, because quantum mechanics doesn't know what "g" is. Not to mention that the result is unphysical from the QM perspective, with respect to the intrinsic vanishing of every QM packet, as you mentioned above. If you get angry so easily when somebody criticizes you, you should be more careful when you do the same to someone else. My description of reality cannot depend on whether we can derive a formal model of it - or not. For example, turbulence, the formation of galaxies, and density fluctuations inside a gas exist, although we still have no formal description of such phenomena. The consecutive logic of formal math is apparently less effective when parallel systems of many particles are involved. We can still model these phenomena in computer simulations at the particle level with cellular-automata models, which don't require introducing any physical model with measured constants into the description (lattice-Boltzmann models, for example).

El Cid said... Well, Zephir, you win: I've been unable to resolve the problem using QM. But I'm not a physicist; I'm the ignorant one. The problem can, however, be resolved using QM, and any physicist could have solved this trivial problem. Now, why don't you solve it using AWT?

Zephir said... I win, because I have, and use, a more general insight into the situation. Whatever equation you derive, I can demonstrate rather easily that such a description has its own limits. I've lost, because my general approach doesn't enable me to model particular situations exactly. I can say that we could use a Boltzmann gas simulation on a strong computer, at least conceptually, blah, blah... But I still cannot demonstrate any exact particular solution in real time without ad hoc simplifications, which in turn would violate fundamental AWT principles at the nonlocal scale. As you can see, the whole of AWT is about dualities of reciprocal approaches. The intuitive approach diverges from the exact approach, and you should always decide which approach is more useful for you. Common people revise the results of formal thinkers in an intuitive way, while formal thinkers rectify their intuitive extrapolations with formal models.

James said...
Right, I'm agreeing with El Cid on this one, as he seems to have a much better grasp of physics than you do. He asked you to solve a simple problem and you couldn't; you spluttered and coughed, but there was no solid answer, leading me to deduce that you haven't the faintest notion what you're talking about. Please feel free to prove me wrong - with mathematics, preferably. Besides, wasn't an Aether disproved in the early 18th century?

Zephir said... /* ...he seems to have a much better grasp of physics than you do...*/ This is irrelevant to what I'm writing here. If you can refute a single sentence from my whole blog, you're welcome to do so. Concerning the El Cid textbook example: if I were convinced that free fall can be solved in quantum mechanics, I'd have proposed some solution already (1, 2, 3, 4). But as far as I know, quantum physics has neither the gravity force nor the gravitational constant in its repository, so such an attempt is ridiculous at first sight from my perspective. You can only do it by combining equations from different theories, Newtonian dynamics in particular. If you or El Cid didn't realize this, why should it be just my problem in understanding physics?

Zephir said... /*...wasn't an Aether disproved in the early 18th century?...*/ Wasn't the Aether disproval disproved in the early 21st century by me?

James said... No it wasn't: no experimentation, no maths, therefore no proof, or even, for that matter, a viable theory.

Zephir said... Fortunately, contemporary physics already has a number of unexplained experiments which can be used as logical evidence for many new theories, not just AWT. This means that no new experiments and no formal math are necessary for AWT reasoning; predicate logic and the existing observations are enough.

Zephir said... AWT explains dark matter and the omnidirectional expansion of the Universe with a model of the dispersion of ripple waves at the water surface. This dispersion decreases the speed of the waves from the extrinsic perspective, which manifests as an omnidirectional expansion of the Universe from the intrinsic perspective, or as a subtle deceleration which effectively freezes the spreading of light waves; this can be interpreted as the spreading of these waves through a mass/energy density gradient of the vacuum, i.e. as dark matter. Such a model leads to testable predictions: for example, the deceleration of the Pioneer spacecraft is equal to the product of the Hubble constant and the speed of light, which agrees (to within about 15%) with the observed value.

Zephir said... In this 28-page review you get an extensive survey of the current theory and understanding of the rapidly expanding universe via cosmic acceleration (available online for free within the first month of publication).

Zephir said... Theory of field interactions by T.B. Bon, containing some arithmetic about the Doppler effect of "detuned light" spreading through an infinite Universe.

Zephir said... Modified gravity as an alternative to dark matter: TeVeS is one of the best extrapolations of relativity, but it still cannot address well all the aspects of dark matter where its particle character manifests. In AWT, dark matter is formed both by space-time deformation and by particles of matter trapped in it.

Zephir said... The behavior of dark matter can be understood quite well through the parabiosis of scientists and protoscientists (so-called crackpots), which Web 2.0 technology in particular has enabled. Scientists tend to form a cohesive group, and they tend to repel crackpots from their center.
Crackpots are usually individualists and don't form coalitions - so they act in diaspora. They're attracted to scientists and scientific findings, though, and they tend to surround them. They're particularly sensitive to trends in accidental findings, and you can usually find the "dark strings" of crackpots there. In the AWT universe, dark matter plays the role of an incubator of new galaxies, while the existing clusters of normal matter gradually dissolve into radiation and neutrinos, which serve as material for new dark-matter clusters. You may often observe that elderly scientists become crackpots, or at least engage in suspicious research (like cold fusion). You may think of dark matter as formed of mutually gravitationally repulsive particles (it has a gravitational charge opposite to that of normal matter), so it tends to fill cosmic space in a uniformly thin manner (in this way it represents the "missing antimatter" of the Universe). The proximity of normal matter (which is gravitationally attractive by itself) leads to a concentration of dark matter at the perimeter of massive objects. When three or more massive objects line up, the dark matter tends to concentrate along this line too, because its mutual repulsion is shielded by the massive objects along the line, and it forms the dark-matter fibers. Of course, such behavior of dark matter has nothing to do with MOND theory; it essentially violates it instead.

Zephir said... List #2: extra dimensions, scalar fields, quintessence, mirror matter, quantum gravitation, axions, dilatons, inflatons, heavy and dark photons, leptoquarks, dark atoms, fat strings and gravitons, magnetic monopoles and anapoles, sterile neutrinos, colorons, fractionally charged particles, chameleon particles, dark fluid and dark baryons, photinos, gluinos, gauginos, gravitinos and sparticles, and WIMPs, SIMPs, MACHOs, RAMBOs, DAEMONs, and Randall-Sundrum 5-D phenomena (dark gravitons, K-K gluons and micro black holes).
Cubical atom
From Wikipedia, the free encyclopedia

The cubical atom was an early atomic model in which electrons were positioned at the eight corners of a cube in a non-polar atom or molecule. The theory was developed in 1902 by Gilbert N. Lewis, published in 1916 in the article "The Atom and the Molecule", and used to account for the phenomenon of valency.[1] Lewis's theory was based on Abegg's rule. It was further developed in 1919 by Irving Langmuir as the cubical octet atom.[2] The figure below shows structural representations for elements of the second row of the periodic table.

[Figure: cubical-atom structures for the second-row elements]

Although the cubical model of the atom was soon abandoned in favor of the quantum mechanical model based on the Schrödinger equation, and is therefore now principally of historical interest, it represented an important step towards the understanding of the chemical bond. The 1916 article by Lewis also introduced the concept of the electron pair in the covalent bond, the octet rule, and what is now called the Lewis structure.

Bonding in the cubical atom model

Single covalent bonds are formed when two atoms share an edge, as in structure C below. This results in the sharing of two electrons. Ionic bonds are formed by the transfer of an electron from one cube to another, without sharing an edge (A). An intermediate state B, where only one corner is shared, was also postulated by Lewis.

[Figure: electron transfer (A), corner sharing (B), and edge sharing (C)]

Double bonds are formed by sharing a face between two cubic atoms. This results in the sharing of four electrons.

[Figure: two cubes sharing a face]

Triple bonds could not be accounted for by the cubical atom model, because there is no way for two cubes to share three parallel edges. Lewis suggested that the electron pairs in atomic bonds have a special attraction, resulting in a tetrahedral structure, as in the figure below (the new location of the electrons is represented by the dotted circles in the middle of the thick edges). This allows the formation of a single bond by sharing a corner, a double bond by sharing an edge, and a triple bond by sharing a face. It also accounts for the free rotation around single bonds and for the tetrahedral geometry of methane.

[Figure: tetrahedral arrangement of electron pairs on the cube]

References

1. Lewis, Gilbert N. (1916-04-01). "The Atom and the Molecule". Journal of the American Chemical Society. 38 (4): 762–785. doi:10.1021/ja02261a002.
2. Langmuir, Irving (1919-06-01). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society. 41 (6): 868–934. doi:10.1021/ja02227a002.
Quantum Mechanics
Winter, 2012

Quantum theory governs the universe at its most basic level. In the first half of the 20th century, physics was turned on its head by the radical discoveries of Max Planck, Albert Einstein, Niels Bohr, Werner Heisenberg, and Erwin Schrödinger. An entirely new logical and mathematical foundation—quantum mechanics—eventually replaced classical physics. We will explore the quantum world, including the particle theory of light, the Heisenberg Uncertainty Principle, and the Schrödinger Equation.

Lectures in this Course

1. Introduction to quantum mechanics
Professor Susskind opens the course by describing the non-intuitive nature of quantum mechanics. With the discovery of quantum mechanics, the fundamental laws of physics moved into a realm that defies human intuition or visualization. Quantum...

2. The basic logic of quantum mechanics
Professor Susskind introduces the simplest possible quantum mechanical system: a single particle with spin. He presents the fundamental logic of quantum mechanics in terms of preparing and measuring the direction of the spin. This fundamental...

3. Vector spaces and operators
Professor Susskind elaborates on the abstract mathematics of vector spaces by introducing the concepts of basis vectors, linear combinations of vector states, and matrix algebra as it applies to vector spaces. He then introduces linear operators...

4. Time evolution of a quantum system
Professor Susskind opens the lecture by presenting the four fundamental principles of quantum mechanics that he touched on briefly in the last lecture. He then discusses the evolution in time of a quantum system, and describes how the classical...

5. Uncertainty, unitary evolution, and the Schrödinger equation
Professor Susskind begins the lecture by introducing the Heisenberg uncertainty principle and explains how it relates to commutators. He proves that two simultaneously measurable operators must commute. If they don't, then the observables...

6.
Professor Susskind begins the lecture with a review of the problem of a single spin in a magnetic field. He re-emphasizes that the observables corresponding to the Pauli sigma matrices do not commute, which implies that they obey the uncertainty...

7. Entanglement and the nature of reality
This lecture takes a deeper look at entanglement. Professor Susskind begins by discussing the wave function, which is the inner product of the system's state vector with the set of basis vectors, and how it contains probability amplitudes for the...

8. Particles moving in one dimension and their operators
Professor Susskind opens the lecture by examining entanglement and density matrices in more detail. He shows that no action on one part of an entangled system can affect the statistics of the other part. This is the principle of locality and is...

9. Fourier analysis applied to quantum mechanics and the uncertainty principle
Professor Susskind opens the lecture with a review of the entangled singlet and triplet states and how they decay. He then shows how Fourier analysis can be used to decompose a typical quantum mechanical wave function.

10. The uncertainty principle and classical analogs
Professor Susskind begins the final lecture of the course by deriving the uncertainty principle from the triangle inequality. He then shows the correspondence between the motion of wave packets and the classical equations of motion. The expectation...
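The commutator facts from Lectures 5 and 6 are easy to check numerically. Below is a minimal sketch (mine, not course material) verifying that the Pauli matrices fail to commute and that the Robertson uncertainty bound ΔA·ΔB ≥ |⟨[A,B]⟩|/2 holds in a simple spin state:

```python
import numpy as np

# Pauli matrices for a single spin (the sigma matrices of Lecture 6).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

comm = sx @ sy - sy @ sx
print(np.allclose(comm, 2j * sz))       # True: [sx, sy] = 2i*sz, so they don't commute

# Robertson uncertainty bound in the spin-up state |0> = (1, 0):
psi = np.array([1, 0], dtype=complex)
ev = lambda op: (psi.conj() @ op @ psi).real   # expectation value in |psi>
dA = np.sqrt(ev(sx @ sx) - ev(sx) ** 2)        # uncertainty of sx
dB = np.sqrt(ev(sy @ sy) - ev(sy) ** 2)        # uncertainty of sy
bound = abs(psi.conj() @ comm @ psi) / 2       # |<[sx, sy]>| / 2
print(dA * dB >= bound - 1e-12)                # True: 1 * 1 >= 1
```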
In just about every interpretation of quantum mechanics, there appears to be some form of dualism. Is this inevitable or not?

In the orthodox Copenhagen interpretation of Bohr and Heisenberg, the world is split into a quantum and a classical part. (Yes, that is actually what they wrote, and not a straw man.) The Heisenberg cut is somewhat adjustable, though, somewhat mysteriously. This adjustability can also be seen in other interpretations, and it almost suggests the cut is unphysical, yet it has to appear somewhere. There is a duality between the observer and the observables.

Von Neumann postulated a two-step evolution: one Schrödinger and unitary, the other a collapse caused by measurement, whatever a measurement is. Another form of the duality. What a measurement is, and when exactly it happens, is also adjustable.

In decoherence, there is a split into the system and the environment. A split has to be made for decoherence to come out, but again, the position of the split is adjustable with almost no physical consequence.

In the many-worlds interpretation, there is the wavefunction on the one hand, and a splitting into a preferred basis on the other, followed by a selection of one branch over the others. This picking out of one branch is also dualistic, and is an addendum over and above the wavefunction itself.

In the decoherent-histories approach, there is the wavefunction on the one hand, and on the other an arbitrary choice of history operators, followed by a collapse to one particular history. The choice of history operators depends on the questions asked, and these questions stand in dual opposition to the bare wavefunction, which is oblivious to the questions.

In Bohmian mechanics, there is the wavefunction, and dual to it a particle trajectory.

Why is there a duality? Can there be a nondual interpretation of quantum mechanics?

Some people ascribe the duality to the duality between the classical apparatus and the quantum microscopic system, but I think this is a little old-fashioned. The quantum description also works for a bad apparatus and a big apparatus --- like my eye looking at a mesoscopic metal ball with light shining on it. This situation does not measure the position of the ball, nor the momentum, nor anything precise at all. In fact, it is hard to determine exactly what operator my eye is measuring by looking at some photons. A modern approach to quantum mechanics treats the whole system as quantum mechanical, including my eye and myself. But then the source of the dualism becomes apparent. If I simulate my own wavefunction on a computer, along with that of the ball and the light (the simulation would be enormously large, but ignore that for now), where is my perception of the ball contained in the simulation? It is not clear, because the evolution would produce an enormously large set of wavefunction values in extremely high dimension, most of which are vanishingly small, but a few of which are smeared over configurations describing one of many plausible possible outcomes. The linear time evolution would produce a multiplying collection of weighted configurations, but it will never contain a data bit corresponding to my experience. But I can introspect and find out my own experience, so this data bit is definitely accessible to me.
So I can see a data bit using my mind which is not clearly extractable from this computer simulation of my mind. The basic problem is that the knowledge in our heads is classical information; it might as well be data on a computer. But the quantum system is not made up of classical information, but of wavefunction data, and wavefunction data is not classical information, nor is it a probability distribution on classical information, so it does not have an obvious interpretation as ignorance of classical information. The reason probability is unique is that only the probability calculus has the Monte-Carlo property: if you sample the distribution and average over the time-evolution of the samples, it's the same as averaging over the time-evolution of the distribution. In quantum mechanics, samples can interfere with other samples, making the restriction to a collection of independent classical samples inconsistent. So I can't say the simulation is simulating one of many samples; at best I can say it is approximately simulating one of many clumps-of-samples corresponding to nearly completely decohered histories. But when I entangle myself with a quantum system, using a device which entangles itself with the quantum system, I find _by_doing_it_ that the result is probabilistic on the classical information in my mind. The classical information is determined after the entanglement event, and the result is random with probabilities given by the Born rule, so the result is definitely a probability. But the result is only asymptotic to a probability in quantum mechanics.

Why Duality?

The duality in quantum descriptions is always between the linear evolution of the quantum mechanical wavefunction and the production of classical data according to a probability distribution. Wavefunctions are not probabilities, but when they produce classical data, they can only be probabilities, so they turn into probabilities. How exactly do they turn into probabilities? This is the mismatch between the probabilistic calculus for knowledge and information, and the quantum mechanical formalism for states. In order to produce probabilities from pure quantum mechanics, you have to find the proper reason why wavefunctions are linked to probabilities. Each interpretation has a slightly different flavor of explanation for the link, but of these, Copenhagen, many-worlds, CCC (consciousness-causes-collapse), many-minds, and decoherence/consistent-histories all place the reason in the transition to the macroscopic observer-realm. The details are slightly different. Copenhagen has a ritualized system/apparatus/observer divide, a classical-quantum divide which looks artificial. Many-worlds has an observer's path of memories, which selects which world is observed. Many-minds too; I can't distinguish between many-minds and many-worlds, not even philosophically. I think many-minds was invented by someone who misunderstood many-worlds as being something other than many-minds. Consciousness-causes-collapse is the same as well, except that it rejects the alternate counterfactual mental histories as "nonexisting" (whatever that means exactly; I can't differentiate this one from many-worlds either). Decoherence/consistent-histories insists that the path is a decoherence-consistent selection, which is simply a good direction in which the wavefunction has become incoherent and the density matrix is diagonal, but it is specified outside the theory.
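The interference point in the answer above can be made concrete with a toy two-path example (my own sketch, not the answerer's): for classical probabilities, "sample each path independently" gives the same answer as evolving the whole distribution, but quantum amplitudes add before squaring, so the independent-samples picture breaks down:

```python
import numpy as np

# Two paths to the same detector, each taken with amplitude 1/sqrt(2),
# with a relative phase of pi between them.
a1 = np.exp(1j * 0.0) / np.sqrt(2)
a2 = np.exp(1j * np.pi) / np.sqrt(2)

p_mixture = abs(a1) ** 2 + abs(a2) ** 2   # "independent samples" answer: 1.0
p_quantum = abs(a1 + a2) ** 2             # add amplitudes, then square: ~0.0
print(p_mixture, round(p_quantum, 12))    # the paths interfere destructively
```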
It's always the same dualism: the classical data is not in the simulation, and we can see it in our heads; and the reduction to a diagonal density matrix is only asymptotically true, while it needs to be exactly true to work. The variables that describe our experience of the macroscopic world are discrete packets of information with a definite value, or probability distributions on such, which model our ignorance before we get the value. There is nothing else out there which can describe our experience. The quantum simulation just doesn't contain these classical bits, nor does it contain anything which is exactly and precisely a classical probability distribution. Quantum mechanically simulate a particle in a superposition interacting with a miniature model brain, where light from the particle triggers a molecule in the brain to store the information about the particle's position, and the quantum formalism will produce a superposition of at least two different configurations of the molecule and of the brain; but at no point will it contain the actual value of the observed bit, nor a probability distribution for this value. If this quantum wavefunction simulation is a proper simulation of the brain, then this internal brain has access to more information than the complete simulation contains viewed from the outside. As far as I can see, there are exactly two possible explanations for this.

Many Worlds

The idea starts with the observation that you can't know in advance what it's supposed to feel like to be in a superposition, because what a physical phenomenon "feels like" is not part of physics. There is always a dictionary between physics and "feels like" which tells you how to match physical descriptions to experience. For example, matching light of a certain wavelength to the experience of seeing red. If you simulate a classical brain and you copy the data in the classical brain simulation, then by querying the copies you will see that they cannot differentiate between their pasts, and they will both think they are the same person. The quantum simulation contains all sorts of things inside, and it is not clear how it feels to the internal things, because that all depends on how you query them. If you query extremely unlikely components of the superposition, you can get any answer at all to any question you ask. You have to ask questions, because without a positive way to investigate the brain's feelings, there is no meaning you can assign to the assertion that it has feelings at all. When you ask the question, you must choose which branch of the simulated quantum system to query. So there is no obvious way to embed classical experiences into the simulation, and the many-worlds interpretation takes the point of view that it is just a perceptual axiom, like seeing red, that the way our classical minds are embedded into a quantum universe is that they feel a unique path through a decohering net of spreading quantum events. A classical mind just doesn't "feel" superposed; it can't feel superposed, because feelings are classical things. The embedding into the model is just a little off because of this, and our minds have to select a path through the diverging possible histories. The path-selection by the mind produces new classical information through time, and the duality in quantum mechanics is identified with the philosophers' mind-body duality.

Quantum mechanics is measurably wrong

I think this is the only other plausible possibility.
The existence of classical data in our experience makes it philosophically preferable to have a theory which can say something about this classical data, which can interpret it as a sharp value of a quantity in the theory, rather than as a history-specification which lies outside the physics of the theory. This can be philosophically preferred for two reasons:

• It allows a physical identification of mental data with actual bits which can be extracted from the simulation, so that the definite bit values encoding our experiences are contained in a fundamental simulation directly, as they are in the classical model of the world.

• It means that simulations of the physical world could be fully comprehended: they are classical computations on classical data, or probability distributions which represent ensembles of classical data.

I think the only real reason to prefer such a theory is if it could describe the world with a smaller model than quantum mechanics, one which would require fewer numbers to simulate. It seems like an awful waste to require exponentially growing resources to simulate N particles, especially when the result in real life is almost always classical behavior with a state variable linear in N. But the only way a theory can do this is if it fails to coincide with quantum mechanics at least when doing Shor's algorithm. So this position holds that quantum mechanics is wrong for heavily entangled many-particle systems. In this case, the dualism of quantum mechanics would arise because it is an approximation to something deeper down which is not dual; the approximation makes wavefunctions out of probability distributions in some unknown limit, and this limit is imperfect. So the wavefunctions are approximations to probabilities, not the other way around, and we see the real deal -- the probabilities -- because at our scale the wavefunction description is no good. Nobody has such a theory. The closest thing is the Born version of quantum mechanics, which is computationally even bigger than quantum mechanics, and so even less philosophically satisfying. It might be good even to find a half-way house: just a method of simulating quantum systems which does not require exponential resources, except in those cases where you set up a quantum computer to do exponential things. Nobody has such a method either.

I wasn't under the impression that my answer at all depended on having a relatively well-defined measurement. Can you demonstrate, for my benefit, where I'm slipping the assumption in? Your example of your eye perceiving photons (perhaps reflected from a plate which is intercepting electrons) is an example where degrees of freedom of the electrons become coupled with the photoplate, and then in turn with your visual cortex, mediated through chemistry and light. – Niel de Beaudrap Nov 20 '11 at 14:04

And exactly which wavefunction values for the atoms of the visual cortex correspond to seeing the light? It's a blob in configuration space, defined along certain nearly orthogonal directions for different perceptions. The map is from classical knowledge to these blobs, and the dictionary is not in the time evolution. – Ron Maimon Nov 20 '11 at 21:20

Indeed, there should be a wide swath (even if you mod out the idiosyncratic brain structure of the person seeing the light).
I don't pretend that there is a simple state corresponding to making an observation; one can't easily say "here: the observation is made just at this point where potassium reaches this threshold concentration". That would be bad neuroscience, to say nothing of bad QM! But this does not contradict the fact that observations, ill-defined as they are, are indeed made; and that they are the result of strong couplings with other systems. That is what I was stressing. – Niel de Beaudrap Nov 21 '11 at 1:20

@Niel: I had no problem with your answer, I think it is saying correct things. I just don't think it exhausts the question, because I think the main problem people have with quantum mechanics is that they can introspect and see firm stable classical data, like the contents of this message, and quantum mechanics, when simulated, produces superpositions of many values of such data, and these superpositions are only asymptotically interpretable as probability densities, and we are not asymptotic beings. – Ron Maimon Nov 21 '11 at 4:40

My reaction was based on your first paragraph, when my answer was one of the only two earlier answers. I'm sorry if I somehow misunderstood. – Niel de Beaudrap Nov 21 '11 at 13:36

The duality has something to do with the strength of interaction of a system with its environment, which may or may not consist largely of a piece of measurement apparatus of which we are consciously aware. In short, the duality arises from fixating on two extremes of behaviour: strongly coupling with the environment, or not. (Realizing this doesn't necessarily simplify our understanding of QM, but it is the theme underlying the dualities you have noted.) What all of the interpretations agree on is this: a system which is isolated evolves according to the Schrödinger equation, and a system which interacts strongly enough with a macroscopic system — such that we can observe a difference in the behaviour of that large system — does not. These are two polar extremes of behaviour; so it is not in principle surprising that they exhibit somewhat different evolutions. This seems to me where the duality comes from: stressing these two opposite poles.

• In the Copenhagen interpretation, the "quantum" systems are the isolated ones; the "classical" systems are the large macroscopic ones whose conditions we can measure. Nothing is said about the regime in between.

• In von Neumann's description, the evolution of isolated systems is by the Schrödinger equation; ones strongly coupled to macroscopic systems get projected. Again, nothing is said about the regime in between.

"Decoherence" and "Many-Worlds" are not really distinguishable interpretations of quantum mechanics (indeed, in Many-Worlds, the preferred basis is thought to be selected by decoherence, though this must still be demonstrated as a technical point). While there is some debate about the precise ontological nature of the phenomenon, and important technical issues to resolve, pretty much everyone in the "decoherence" camp (with or without many worlds) agrees that the statistical nature of quantum mechanics — as opposed to the determinism of the unitary dynamics itself — arises from interaction with the environment. The fuzziness of the boundary between the two situations of "isolated system" and "strong coupling to the environment", in fact, is a symptom of the fact that "not completely isolated" does not automatically take you all the way to the regime of "strongly coupled to the environment". There is, presumably, a gradient.
Furthermore, you get to choose what the boundaries of "the environment" — that part of the world which is just too big and messy for you to try to understand, or more to the point, experimentally control — are. So, if a physical system is only a little leaky, or is interfered with only slightly by the outside world, you can try to account for this outside meddling, and so describe the system as one which may be somewhat less leaky. Some of the projects of interpretations of quantum mechanics are trying precisely to describe the two extremes, and so everything in between, using a monism of dynamics. Many-worlds, for instance, seems to shrug at the question of why we only perceive one world out of many, but wholeheartedly believes that all dynamics is in principle unitary, and is trying to prove it. And Bohmian Mechanics already has monism, albeit at the cost of faster-than-light signalling between particles by way of the quantum potential field — albeit signalling which manifests macroscopically only as correlations, for essentially thermodynamical reasons — which understandably puts most people off.

Note that there are also dualisms in science, historically and in modern times, outside of quantum mechanics:

• historically: terrestrial and celestial mechanics (subsumed by Newtonian mechanics)

• historically: organic versus inorganic matter (subsumed once the chemistry of carbon started to become well-understood)

• currently: gravity (treated geometrically) versus other elementary forces (treated by boson mediation)

• currently: "hard sciences" (theories of the world largely excluding human behaviour) versus soft "sciences" (theories of the world largely concerning human behaviour)

Any time you have two different models of the world which do not seem obviously compatible, but which do (at least somewhat successfully) describe systems well in some domain, there is a sort of duality between those two models. The dualities in our current understanding of quantum mechanics are somewhat unique in that they concern exactly the same systems, and in the fact that interactions in one of the regimes ("strong coupling with the environment") seem to be the only way for us to obtain information about what happens in the other ("weak coupling with the environment")!

Definitely Useful. I like it and I think it's quite accurate to modern pragmatic approaches to QM, although I think there are Physicists who take a PoV close to Ron's more metaphysical Answer. I have a quibble, however: I think your qualification of your conflation of Decoherence and Many Worlds as interpretations is not full enough. Straight Decoherence interpretations have a much more conservative interpretation of probability and its relationship to statistics than Many Worlds. – Peter Morgan Nov 20 '11 at 14:44

@PeterMorgan: I mean that saying "decoherence versus Many-Worlds" is like saying "Christianity versus Mormonism". Modern Many-Worlds advocates believe that decoherence is the mechanism which generates 'worlds': it is a school of decoherence, though not the only one. Still, it isn't clear to me at all that "straight decoherence" interpretations are any more conservative. Just as MWI doesn't explain conscious experience of only one world, the others lack an explanation for why entanglement gives rise to stochastic behaviour (the partial trace formula merely articulates that it does). – Niel de Beaudrap Nov 20 '11 at 15:06

@Downvoter: any critique you would like to make?
– Niel de Beaudrap Nov 20 '11 at 17:55

I didn't downvote, but I think it is not so useful to compare QM to other dualities, because it isn't clear that the duality is purely philosophical. – Ron Maimon Nov 20 '11 at 22:01

How does one tell when a "philosophical" duality ceases to be one? The OP notes that there is an apparent duality of processes or natures; historically, what should an alchemist have said about the response of muscle tissue to electrical shocks, or a modern particle theorist say about the non-renormalizability of theories with gravitons? In each case there are things which one must treat by different formalisms but no clear way to distinguish what happens at the boundary. Apparent duality arises out of the lack of a unifying theory, bolstered by opinions that there may/should be none. – Niel de Beaudrap Nov 21 '11 at 1:25

The duality is inherent in the way we do physics. We never consider the whole universe with all its details. In order to make sense of what we observe (which is always a small part of the universe only) we - the users of physics - must make a distinction between "the observed = the system" and "the remainder = the environment". The observed system is then described as closely as warranted, while the remaining environment is described in a simple, effective way - e.g., as an external classical field (in many applications), as a classical measurement (in the Copenhagen interpretation), as a bath of harmonic oscillators in equilibrium (in decoherence studies), or as ignored details (in thermodynamics and in cosmology). This is necessary in order that we can get rid of unwanted details without losing predictability of the system of interest. Thus the duality you mentioned is imposed on the universe by inquisitive minds.

To take a different approach to the variety of ways in which you present QM (which all seem fine, but perhaps they miss the underlying structure), we compute expected values of an observable $O$ using the trace rule in QM, $E[O]=\mathsf{Tr}[\hat O\hat\rho]$, in which on one side there is an operator that represents a measurement and on the other side there is a density matrix that represents a state, essentially because of the Hilbert space structure of vectors and an inner product. Loosely, the inner product of the Hilbert space allows us to ask what components a prepared vector state has "in the same direction" as each of a (possibly infinite) set of reference states. Hilbert spaces are the mathematical structure at the very bottom of all quantum mechanics, and the inner product (which every Hilbert space has as part of its construction) induces a linear duality between prepared states and reference states. That duality may play out in different interpretations in different ways, but it will always be there. In short, if we have a Hilbert space structure, we have a linear duality. If we don't have a Hilbert space structure, we're not doing quantum mechanics. Not that we can't use other mathematical structures, but it will not be QM unless it can be presented in terms of the mathematics of Hilbert spaces, effectively as a matter of definition. And welcome to PhysicsSE.

EDIT: As a result of Niel's and Ron's Comments, I looked at what I've missed in the Question (not infrequently I find that my first response misses some "detail" or another, and sometimes it's the whole point).
My initial Answer addresses the cut into System and Observer, which I see as inevitable just because of the underlying mathematics I point out above, but it does not explicitly address the difference between unitary and collapse evolutions. I see these two evolutions so much as an obvious consequence of the mathematical duality that I didn't notice that I was conflating something that would not be obvious. I find Niel's Answer somewhat more congenial to my own thinking, which I would say, still too concisely, as: the difference between unitary and collapse evolutions comes from placing the Heisenberg cut in such a way that there is an (effectively) infinite number of DoFs on the human Observer's side of the mathematical duality, while there is only a relatively small number of DoFs on the other side. That's a somewhat Decoherence-y way of looking at things, to which I do not fully subscribe, but I find it a useful approach nonetheless. I find both Niel's and Ron's Answers Useful, although as different sides of a coin, and I commend them both to you. The duality between the wave function and Bohmian trajectories is rather different, and rather unbalanced, as Niel points out, and it looks as if Ron hasn't much addressed it. I find that I can't see how to address that duality in a unified way, partly because its attractions have never seemed compelling enough for me to work within the mathematics of the Bohmian POV.

I don't think the OP is talking about mathematical duality (a mapping from functions to functionals), but philosophical duality (that there exist two fundamentally different sorts of things in the world, rather than one single sort of thing). It is not obvious that there is a connection between the two. – Niel de Beaudrap Nov 20 '11 at 1:28

Peter Morgan is stating that there are reference states defined by the experimental apparatus which is measuring the system, and the wavefunction only determines the outcome in relation to the reference states selected by the apparatus. You can alter the apparatus to measure a different observable, and this moves the eigenstates of one observable to those of another, a rotation of the Hilbert space; and you can also rotate the Hilbert space by changing the wavefunction, the mathematical duality. This is a very Copenhagen, very operational, and very old-fashioned view of QM; it's not reality. – Ron Maimon Nov 20 '11 at 6:10

@Niel I take him to be asking whether interpretations of the mathematics of QM have to include duality in some form. Looking at the mathematics, there is a linear duality in the Hilbert space structure. Your Comment has prodded me to look at the Question more carefully, thanks, and I'll edit my Answer some, although your Answer and Ron's are both very Useful, +1, enough that it seems a little pointless to upgrade my squib. – Peter Morgan Nov 20 '11 at 13:50

@Peter: the terse previous comment is a result of the space limit--- I think that the pure operationalist would agree that the duality between reference states and prepared states is closely related to the duality between the classical measuring apparatus and the quantum microscopic system. But it is difficult to reconcile this with the idea that quantum mechanics should apply universally. Perhaps quantum mechanics should not apply universally; this was Bohr's position after all. – Ron Maimon Nov 21 '12 at 16:19

It had to happen, I was the only one here who hadn't been down-voted.
I console myself that it took 2 days for someone to decide that It Won't Do. Although I can see numerous reasons why someone might down-vote this or any of the other Answers, and I'm pretty sure we're all winding down, would the down-voter care to add to the conversation? – Peter Morgan Nov 21 '11 at 23:20

You correctly noticed that in some interpretations there is a "split" between "quantum" and "classical", and this split is somewhat arbitrary. You can move it closer to the observer without losing consistency. If you take this to the extreme and move the split as close to the observer as possible, you will find that the whole universe follows certain laws, such as unitary evolution, when separated from the observer, and only the observer, a single isolated person, does not. This is what you should obtain, and it is correct. What is bad about it? Only one problem: it makes the most fruitful instrument of research ever invented by humans, the scientific method, non-applicable. The scientific method requires independent confirmation of observations and repeatability. If there is a special person in the universe, then the scientific community would be unable to predict the observations by that person based on their own experiments, or their predictions will be wrong, however advanced the instruments they use. That's why we have the quantum interpretations. All of them are designed to reconcile the scientific method with quantum mechanics to a degree which allows one to obtain practical results. Still, the scientific method remains in conflict with quantum mechanics, but this conflict can be kept contained so that practical results in applied science are possible.

It isn't 100% clear that the split is solipsistic, since you can transfer the solipsism around between different people, and you end up consistent. This is one of the motivations Everett gives for many-worlds in 1957: each solipsist thinks the other guy is superposed until the measurement, so you just transfer the solipsism to an observer far away, and you leave yourself superposed, and this is many-worlds. – Ron Maimon Mar 26 '12 at 0:01

I never used the word "solipsism", so what's your objection? – Anixx Mar 26 '12 at 22:00

Yes, indeed the QM formalism predicts that there is a special person (this is not exactly solipsism). Some physicists do not want special persons, so they invent Many-Worlds, Relational QM and other interpretations that postulate that every man is special in their own (unobservable to our science) universe. – Anixx Mar 26 '12 at 22:10
Why do we consider evolution of a wave function, and why is the evolution parameter taken as time, in QM? If we look at a simple wave function $\psi(x,t) = e^{i(kx - \omega t)}$, $x$ is a point in configuration space and $t$ is the evolution parameter; they both look the same in the equation, so why consider one as an evolution parameter and the other as the configuration of the system? My question is: why should we even consider the evolution of the wave function in some parameter (it is usually time)? Why can't we just deal with $\psi(\boldsymbol{x})$, where $\boldsymbol{x}$ is the configuration of the system, and say that $|\psi(\boldsymbol{x})|^2$ gives the probability of finding the system in the configuration $\boldsymbol{x}$?

(Added) (I had drafted this but missed it while copy-pasting.) One may say, "How do we deal with systems that vary with time?", and the answer could be, "consider time also as a part of the configuration space". I wonder why this could not be possible.

Clarification (after answer by Alfred Centauri): My question is why consider the evolution at all (whatever the case may be and whatever the parameter may be, time or proper time or whatever). My motivation here is to study the nature of the theory of quantum mechanics as a statistical model. I am looking at it from that angle.

Related: – Emilio Pisanty Jul 19 '12 at 12:47

So do I understand you to be asking for a block world formulation of quantum theory? For which you could use the Wightman axioms (albeit they're not close to the successes of Lagrangian QFT). They introduce a single Hilbert space that supports a representation of the Poincaré group, and time is not privileged over space (except for the 1+3 signature). Lagrangian QFT somewhat obscures a block world perspective, insofar as it focuses on a Hilbert space at a single time, corresponding to phase space observables; however a block world perspective of Lagrangian QFT is possible. – Peter Morgan Jul 19 '12 at 20:25

@RajeshD: The Heisenberg formulation takes your point of view: the wavefunction is time independent, but the observables depend on time. This just means that the interaction with the particle at different times is by different operators. – Ron Maimon Jul 20 '12 at 5:24

I think the main reason is practical, but it might be related to a theoretical reason. The main reason is that we almost never use the time-dependent Schroedinger equation, because if the state weren't stationary, its rate of change would be, at the usual atomic scales, so fast that we couldn't measure it or study it empirically with laboratory-sized apparatus. Similarly, what governs the observable properties of macroscopic bodies, such as their chemical bonds and colours, involves stationary states. If the states weren't stationary, the body would not persist long enough for us to consider it as having a property. It is striking how little direct empirical support the time-dependent Schroedinger equation has, and how little use it finds. We don't even use it to study scattering events (which, admittedly, for a very brief time occur very rapidly). This might be related to a deeper theoretical reason one finds in statistical mechanics.
In statistical mechanics, it is often pointed out that measurements made with laboratory-sized equipment necessarily involve a practically infinite time average such as $$\lim_{T\rightarrow\infty}\frac1T\int_0^T f(t)g(t)dt.$$ Well, in Quantum Mechanics, measurement has something similar about it, in that it always involves amplification of something microscopic up to the macroscopic scale so we can observe it (an observation made by many, including Feynman), and the main way to do this seems to be to let the microscopic event trigger the change from a meta-stable state to a stable equilibrium state of the laboratory-sized apparatus (H.S. Green, Observation in Quantum Mechanics, Nuovo Cimento vol. 9, pp. 880–889, posted at , and many others since). Once again, this involves a long-time, stable equilibrium as in Statistical Mechanics. But the relation to the practical reason is not completely clear.

That said, in theory it is sometimes possible to rephrase the time-dependent Schroedinger evolution equation into a space-evolution equation, even though no one ever does this since it has no earthly use. Consider the Klein–Gordon equation (which is the relativistic version of Schroedinger's equation), $$\left(\frac{\partial^2}{\partial x^2}-\frac{\partial^2}{\partial t^2} + V \right)\psi = 0.$$ Obviously, we can isolate either $x$ or $t$, and under certain conditions take the square root of the operator to get $$ \frac{\partial}{\partial x} \psi = \sqrt{ \frac{\partial^2}{\partial t^2} - V}\,\psi .$$ Under the usual physical assumptions of flat space–time and no field-theoretic effects, one could do this to isolate $t$ and get the time evolution, because we assume that energy is always positive, so we can indeed take the square root (all the eigenvalues of the Hamiltonian are positive). This may not always be true when, as here, we try to isolate $x$ and get the space-evolution.

Now, as to the question of why consider any evolution at all, why not just consider $\psi(x,y,z,t)$ in a relativistically timeless fashion: the main answer is that it wreaks havoc with the ideas of measurement and observable, and with the justification of the Born interpretation. Dirac tried to write a Quantum Mechanics textbook your way, but gave up even after the fifth chapter, where he remarks that the notion of observable is not relativistic, and for the rest of the book he proceeds non-relativistically (until he gets to the Dirac Equation at the end). The second edition abandons the attempt to be relativistic, is more traditional, and uses the time evolution point of view from the start. He remarked, famously:

"The main change has been brought about by the use of the word «state» in a three-dimensional non-relativistic sense. It would seem at first sight a pity to build up the theory largely on the basis of nonrelativistic concepts. The use of the non-relativistic meaning of «state», however, contributes so essentially to the possibilities of clear exposition as to lead one to suspect that the fundamental ideas of the present quantum mechanics are in need of serious alteration at just this point, and that an improved theory would agree more closely with the development here given than with a development which aims at preserving the relativistic meaning of «state» throughout."

And in fact Relativistic Quantum Mechanics, as opposed to field theory, is, like many-particle Relativistic (classical) mechanics, not theoretically very well developed.
There seem to be so many problems that people prefer to jump right to Quantum Field Theory, in spite of the divergences and the need for renormalisation and everything. Furthermore, relativistic QM is restricted to the low energy regime, since with high energies particle pair production is possible, yet the equations of QM hold the number of particles fixed and do not allow for pair production.

Thanks for the nice answer. It was a joy reading it. You really got the spirit of the question. – Rajesh Dachiraju Jun 7 '13 at 22:39

(1) In the Heisenberg picture, the wavefunction does not evolve with time, the operators do. (2) For relativistic covariance, $t$ ought to be a coordinate with proper time $\tau$ as the evolution parameter. (3) In QFT, which is relativistically covariant, $t$ is a coordinate. If these don't begin to address your question, please re-edit your question to clarify.

I have edited with a clarification in view of your answer. – Rajesh Dachiraju Jul 19 '12 at 13:10

It's an empirical fact that time exists, and states evolve in time. Or is that really the case, or does it just seem so? Interesting question. Anyway: Feynman path integrals, no such problem.

Sorry, I missed a crucial part of the question while copy-pasting the draft. Now I have added it. I hope you excuse this. – Rajesh Dachiraju Jul 19 '12 at 12:39

You can, sort of. You can take $\psi(x)$ to satisfy the time-independent Schrödinger equation, for some eigenvalue $E_n$ of the Hamiltonian operator that appears in the time-dependent Schrödinger equation. However, I would take that to make the time-independent formalism less fundamental. It's also possible for the time-dependent state to be in a superposition of different energy states, which doesn't play well with the time-independent formalism.

I think you have digressed a bit from what I had in mind. I do not suggest considering the time-independent Schrodinger equation. I am not interested in that, and that is not the only choice. My question is just: why consider evolution of the wave function at all? – Rajesh Dachiraju Jul 19 '12 at 13:00
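To make the point of the last answer concrete (a superposition of energy eigenstates carries genuinely time-dependent physics, which no single time-independent $\psi(\boldsymbol{x})$ can encode), here is a minimal numerical sketch in Python. The toy system, an infinite square well in units $\hbar = m = L = 1$, and the helper names are my own illustrative choices, not anything from the answers above.

import numpy as np

# Infinite square well in units hbar = m = L = 1:
# stationary states phi_n(x) = sqrt(2) sin(n*pi*x), energies E_n = (n*pi)^2 / 2.
def phi(n, x):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

def energy(n):
    return (n * np.pi) ** 2 / 2.0

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

# Equal superposition of the two lowest eigenstates; each picks up its own phase.
for t in (0.0, 0.1, 0.2, 0.3):
    psi = (phi(1, x) * np.exp(-1j * energy(1) * t)
           + phi(2, x) * np.exp(-1j * energy(2) * t)) / np.sqrt(2.0)
    mean_x = np.sum(x * np.abs(psi) ** 2) * dx   # position expectation value
    print(f"t = {t:.1f}   <x> = {mean_x:.3f}")

The printed $\langle x\rangle$ sloshes back and forth with period $2\pi/(E_2-E_1)$; that oscillation is exactly the information a single static $\psi(\boldsymbol{x})$ on configuration space would throw away.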
Applications of Quantum Mechanics

Over at the theoretical physics beach party, Moshe is talking about teaching quantum mechanics, specifically an elective course for upper-level undergraduates. He’s looking for some suggestions of special topics:

The course is titled “Applications of quantum mechanics”, and is covering the second half of the text by David Griffiths, whose textbooks I find to be uniformly excellent. A more accurate description of the material would be approximation methods for solving the Schrodinger equation. Not uncommonly in the physics curriculum, when the math becomes more demanding the physics tends to take a back seat, so we are going to spend quite a bit of the time on what is essentially a course in differential equations, using WKB approximations and perturbation theory and what not. To counter that, I am looking for short and sweet applications of quantum mechanics. Short topics which can be taught in an hour or less, and involve some cool concepts in addition to practicing the new mathematical techniques.

I’m hampered in this by not knowing what’s in the second half of Griffiths (the analogous class at Williams was taught out of Park’s book, because he’s there; I used to have a copy of Griffiths in my office, but it seems to have wandered off). I’m currently teaching a much lower-level version of a similar course, though, so I can suggest a few things: The “Applications” portion of the class I’m currently teaching is really a mad sprint through whatever QM-related topics I can fit into the last three weeks or so. A couple of these, scaled up appropriately, might work.

One obvious application is solid state physics. It’s relatively easy to sketch out the basic ideas that lead to band structure in solids. The full solution is a bit beyond an undergrad course, but you can do the Kronig-Penney model pretty easily (a minimal sketch follows below). That works well to show how periodic arrays of potential wells give you bands of allowed states, with gaps between them. The basic idea of band structure is enough to explain a bunch of useful technology– diodes, transistors, LEDs, etc.

Another area is nuclear physics. I don’t do it in the sophomore-level class that I teach, but you can do a remarkably good job of calculating half-lives of radioactive elements using alpha particle tunneling as a model. Somewhere, I have a Mathematica notebook with code to numerically solve the Schrödinger equation for a bunch of different nuclei, which does a great job of getting the decay rates, and the trend with atomic number.

Those two might very well be in Griffiths already, though. A couple other things come to mind as possible topics, though: If you’re talking about perturbation theory and approximations, you ought to be able to do the Fermi Golden Rule for transitions between atomic states driven by an oscillating electromagnetic field. From there, you can go for the “lies your teachers taught you” topic of demonstrating that you don’t need photons to explain the photoelectric effect. The model is spelled out in a paper by Mandel in the 60’s (I don’t have the cite here, but I can find it if people want to see it), and doesn’t require anything beyond basic perturbation theory.

If the class includes state-vector notation, you can do the No-Cloning Theorem pretty easily (it’s remarkably simple). That’s a good way of getting into all sorts of fun quantum information topics: teleportation, quantum cryptography, some basic quantum computing, etc.
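Since the Kronig-Penney model came up above, here is a minimal sketch of how little code the band-structure demonstration needs. It uses the delta-function ("Dirac comb") limit of the model in units hbar = m = a = 1, and the barrier strength P = 3.0 is an arbitrary illustrative choice, not a number from the post:

import numpy as np

# Kronig-Penney with delta-function barriers (hbar = m = lattice constant a = 1).
# Bloch's theorem allows energies where |cos(q) + P*sin(q)/q| <= 1, q = sqrt(2E);
# P sets the barrier strength (P = 3 is an arbitrary illustrative choice).
P = 3.0
E = np.linspace(0.01, 80.0, 20000)
q = np.sqrt(2.0 * E)
f = np.cos(q) + P * np.sin(q) / q
allowed = np.abs(f) <= 1.0

# Approximate band edges are where 'allowed' switches on or off:
edges = np.flatnonzero(np.diff(allowed.astype(int)))
for lo, hi in zip(edges[::2], edges[1::2]):
    print(f"allowed band:  E from {E[lo]:6.2f} to {E[hi]:6.2f}")

Printing the gaps between consecutive bands makes the diode/LED story almost tell itself: the band gap sets the photon energy released when an electron recombines across it.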
Several commenters to the original post suggested the Quantum Zeno Effect, which is another good one if you’ve done state vectors. Projective measurement is fun stuff. It’s also relatively easy to get into a lot of cool quantum optics material– the Hanbury Brown and Twiss experiment can be explained in a very straightforward way, and you can actually calculate the relevant correlation functions for a bunch of different cases. And that gets you to the basic techniques that are used for everything in quantum optics. That’s what I come up with off the top of my head, without knowing the textbook in question. What did I miss?

1. #1 fizzchick January 8, 2009
I love Griffiths’ textbooks. You did pretty well: The second half also includes notes on the Zeeman effect, tunneling, Berry phase and adiabaticity, and scattering. The latter was used as an excuse by my instructor to describe his research involving the formation of ultracold molecules. It was nice to see QM applied to actual current research problems, even if they were (IMO) somewhat esoteric ones. Of course, if you want more math instead of less, Griffiths also introduces Green’s functions and Cauchy integrals in an offhand manner in Chapter 11 (along the way to the Born approximation).

2. #2 Moshe January 8, 2009
Thanks Chad, looking forward also to all the comments here. Incidentally, you have no idea how appropriate “theoretical beach party” is as a description of David’s work place. Less so in mine (there is a beach nearby, but the snow tends to ruin the party).

3. #3 Asad January 8, 2009
I know and like Griffiths’ books as well, having used three of them in my coursework (QM, EM, and Particles). We used Park for our second-semester QM course, and I found that Griffiths borrows quite heavily from Park for his own QM book. If only Griffiths would write a stat mech textbook, I might actually be able to learn it.

4. #4 Asad January 8, 2009

5. #5 Jonathan Vos Post January 8, 2009
Griffiths is good. I’m reviewing this stuff for some esoteric research of mine on what the solutions of the 4-D space + 1-D time equivalent of the Schrödinger equation look like for hyper-atoms of hyper-electrons in orbitals around hyper-nuclei. Hyperspherical polynomials. I’m interested in a hyper-Linus Pauling: the Nature of the Chemical Bond for “artificial chemistry” in 4+1 dimensional space. It’s fairly nuts what a science fiction author will do to make his hand-waving look plausible. The standard proofs that there are no possible 4-D or 5-D atoms are fatally flawed, because they neglect the negative energy solutions standard in Quantum Field Theory. There has been a flurry of papers since the two co-authored by Hockney, correcting Foppel, on the problem originally posed by J. J. Thomson about arranging unit-charge electrons in a minimum energy configuration on the surface of a unit sphere. This was before the Bohr atom was invented. Some of the new papers look at hyperelectrons in hyperspheres up to 64 dimensions. Weird and lovely stuff, not what “normal” physics looks at. But then Caltech did hire a postdoc whose PhD was on QM if there were an infinite number of space dimensions, and then he went off and wrote for Star Trek, and “Feynman’s Rainbow”, so who knows. And wouldn’t it be nice to know the theoretical spectrum of hyperhydrogen? One could look for it, and of course expect not to see it. But you never know until you do the measurements…

6. #6 bcooper January 8, 2009
Have you tried Reif?
I’m a big Griffiths fan as well and I’ve generally found Reif to be a book in the same spirit. In terms of applied quantum mechanics and physical ideas, I tend to think quantum information stuff might be a good way to go. Maybe some things like Bell’s/CHSH inequality, teleportation, BB84. I guess these are sort of a grab bag when thrown into what the rest of the course sounds like, but I bet students would find them interesting. Maybe a lecture on how lasers work?
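To follow up on the Bell/CHSH suggestion in the comment above, the quantum side of the inequality fits in a few lines of Python. The sketch assumes the textbook singlet-state correlation E(a, b) = -cos(a - b) for spin measurements along angles a and b, and the standard optimal (Tsirelson) angles:

import numpy as np

# CHSH with a spin singlet: the correlation of +/-1 outcomes for analyzer
# angles a and b is E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2          # Alice's two settings
b1, b2 = np.pi / 4, -np.pi / 4   # Bob's two settings (optimal choice)

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))    # 2*sqrt(2) ~ 2.83, above the classical bound of 2

Any local hidden-variable model is capped at |S| <= 2, so the single printed number, about 2.83, already carries the punchline of the lecture.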
Friday, April 20, 2012

Schrödinger has never met Newton

Sabine Hossenfelder believes that Schrödinger meets Newton. But is the story about the two physicists' encounter true? Yes, these were just jokes. I don't think that Sabine Hossenfelder misunderstands the history in this way. Instead, what she completely misunderstands is the physics, especially quantum physics. She is in good company. Aside from the authors of some nonsensical papers she mentions, e.g. van Meter, Giulini and Großardt, Harrison and Moroz with Tod, Diósi, and Carlip with Salzman, similar basic misconceptions about elementary quantum mechanics have been promoted by Penrose and Hameroff. Hameroff is a physician who, along with Penrose, ascribed supernatural abilities to the gravitational field. It's responsible for the gravitationally induced "collapse of the wave function", which also gives us consciousness and may even be blamed for Penrose's (not to mention Hameroff's) complete inability to understand rudimentary quantum mechanics, among many other wonderful things; I am sure that many of you have read the Penrose-Hameroff crackpottery, and a large percentage of those readers even fail to see why it is crackpottery, a problem I will try to fix (and judging by the 85-year-long experience, I will fail). It's really Penrose who should be blamed for the concept known as the Schrödinger-Newton equations.

So what are the equations? Sabine Hossenfelder reproduces them completely mindlessly and uncritically. They're supposed to be the symbiosis of quantum mechanics combined with the Newtonian limit of general relativity. They say:\[ i\hbar \frac{\partial}{\partial t} \Psi(t,\vec x) = \left(-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)\right) \Psi(t,\vec x), \]\[ \Delta \Phi(t,\vec x) = 4\pi G m \left|\Psi(t,\vec x)\right|^2. \] Don't get misled by the beautiful form they take in \(\rm\LaTeX\) implemented by MathJax; superficial beauty of the letters doesn't guarantee the validity. Sabine Hossenfelder and others immediately talk about mechanically inserting numbers into these equations, and so on, but they never ask a basic question: Are these equations actually right? Can we prove that they are wrong? And if they are right, can they be responsible for anything important that shapes our observations?

Of course, the second one is completely wrong; it fundamentally misunderstands the basic concepts in physics. And even if you forgot the reasons why the second equation is completely wrong, it couldn't be responsible for anything important we observe – e.g. for well-defined perceptions after we measure something – because of the immense weakness of gravity (and because of other reasons).

Analyzing the equations one by one

So let us look at the equations, what they say, and whether they are the right equations describing the particular physical problems. We begin with the first one,\[ i\hbar \frac{\partial}{\partial t} \Psi(t,\vec x) = \left(-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)\right) \Psi(t,\vec x). \] Is it right? Yes, it is a conventional time-dependent Schrödinger equation for a single particle that includes the gravitational potential. When the gravitational potential matters, it's important to include it in the Hamiltonian as well. The gravitational potential energy is of course as good a part of the energy (the Hamiltonian) as the kinetic energy, given by the spatial Laplacian term, and it should be included in the equations. In reality, we may of course neglect the gravitational potential in practice.
When we study the motion of a few elementary particles, their mutual gravitational attraction is negligible. For two electrons, the gravitational force is more than \(10^{40}\) times weaker than the electrostatic force. Clearly, we can't measure the transitions in a Hydrogen atom with the relative precision of \(10^{-40}\). The "gravitational Bohr radius" of an atom that is only held together gravitationally would be comparably large to the visible Universe because the particles are very weakly bound, indeed. Of course, it makes no practical sense to talk about energy eigenstates that occupy similarly huge regions because well before the first revolution (a time scale), something will hit the particles so that they will never be in the hypothetical "weakly bound state" for a whole period.

But even if you consider the gravity between a microscopic particle (which must be there for our equation to be relevant) such as a proton and the whole Earth, it's pretty much negligible. For example, the protons are running around the LHC collider and the Earth's gravitational pull is dragging them down, with the usual acceleration of \(g=9.8\,\,{\rm m}/{\rm s}^2\). However, there are so many forces that accelerate the protons much more strongly in various directions that the gravitational pull exerted by the Earth can't be measured. But yes, it's true that the LHC magnets and electric fields are also preventing the protons from "falling down". The protons circulate for minutes if not hours and, as skydivers know, one may fall pretty far down during such a time.

Exceptional experiments in which the Earth's gravity has a detectable impact on the quantum behavior of particles are the neutron interference experiments, those that may be used to prove that gravity cannot be an entropic force. To describe similar experiments, one really has to study the neutron's Schrödinger equation together with the kinetic term and the gravitational potential created by the Earth. Needless to say, much of the behavior is obvious. If you shoot neutrons through a pair of slits, of course they will accelerate towards the Earth much like everything else, so the interference pattern may be found again; it's just shifted down by the expected distance. People have also studied neutrons that are jumping on a trampoline. There is an infinite potential energy beneath the trampoline which shoots the neutrons up. And there's also the Earth's gravity that attracts them down. Moreover, neutrons are described by quantum mechanics, which makes their energy eigenstates quantized. It's an interesting experiment that makes one sure that quantum mechanics does apply in all situations, even if the Earth's gravity plays a role as well, and that's where the Schrödinger equation with the gravitational potential may be verified.

I want to say that while the one-particle Schrödinger equation written above is the right description for situations similar to the neutron interference experiments, it already betrays some misconceptions by the "Schrödinger meets Newton" folks. The fact that they write a one-particle equation is suspicious. The corresponding right description of many particles wouldn't contain wave functions that depend on the spacetime, \(\Psi(t,\vec x)\). Instead, the multi-particle wave function has to depend on the positions of all the particles, e.g. \(\Psi(t,\vec x_1,\vec x_2)\).
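Both numerical claims in the first paragraph of this section are easy to check to order of magnitude. A back-of-the-envelope sketch in Python with rounded SI constants; the "gravitational Bohr radius" below is my shorthand for the Bohr-radius formula with the Coulomb coupling \(ke^2\) swapped for \(Gm_e^2\):

# Order-of-magnitude checks, rounded SI values.
G    = 6.674e-11     # Newton's constant
k    = 8.988e9       # Coulomb constant
e    = 1.602e-19     # elementary charge
m_e  = 9.109e-31     # electron mass
hbar = 1.055e-34

# Electrostatic vs gravitational force between two electrons (both ~ 1/r^2):
print(k * e**2 / (G * m_e**2))        # ~4e42, i.e. gravity is >10^40 times weaker

# Bohr radius with k*e^2 replaced by G*m_e^2, using the reduced mass m_e/2:
a_grav = hbar**2 / (G * m_e**2 * (m_e / 2.0))
print(a_grav)                         # ~4e32 m; the visible Universe is ~9e26 m

The second number actually comes out several orders of magnitude larger than the visible Universe, which only strengthens the point of the text.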
However, the Schrödinger equation above already suggests that the "Schrödinger meets Newton" folks want to treat the wave function as an object analogous to the gravitational potential, a classical field. This totally invalid interpretation of the objects becomes lethal in the second equation.

Confusing observables with their expectation values, mixing up probability waves with classical fields

The actual problem with the Schrödinger-Newton system of equations is the second equation, Poisson's equation for the gravitational potential,\[ \Delta \Phi(t,\vec x) = 4\pi G m \left|\Psi(t,\vec x)\right|^2. \] Is this equation right under some circumstances? No, it is never right. It is a completely nonsensical equation which is nonlinear in the wave function \(\Psi\) – a fatal inconsistency – and which mixes apples with oranges. I will spend some time explaining these points.

First, let me start with full quantum gravity. Quantum gravity contains some complicated enough quantum observables that may only be described by the full-fledged string/M-theory, but in the low-energy approximation of an "effective field theory", it contains quantum fields including the metric tensor \(\hat g_{\mu\nu}\). I added a hat to emphasize that each component of the tensor field at each point is a linear operator (well, an operator distribution) acting on the Hilbert space. I have already discussed the one-particle Schrödinger equation that dictates how the gravitational field influences the particles, at least in the non-relativistic, low-energy approximation. But we also want to know how the particles influence the gravitational field. That's given by Einstein's equations,\[ \hat{R}_{\mu\nu} - \frac{1}{2} \hat{R} \hat{g}_{\mu\nu} = 8\pi G \,\hat{T}_{\mu\nu}. \] In the quantum version, Einstein's equations become a form of the Heisenberg equations in the Heisenberg picture (Schrödinger's picture looks very complicated for gravity or other field theories), and these equations simply add hats above the metric tensor, Ricci tensor, Ricci scalar, as well as the stress-energy tensor. All these objects have to be operators. For example, the stress-energy tensor is constructed out of other operators, including the operators for the intensity of electromagnetic and other fields and/or positions of particles, so it must be an operator. If an equation relates it to something else, this something else has to be an operator as well.

Think about Schrödinger's cat – or any other macroscopic physical system, for that matter. To make the thought experiment more spectacular, attach the whole Earth to the cat so that if the cat dies, the whole Earth explodes and its gravitational field changes. It's clear that the values of microscopic quantities such as the decay stage of a radioactive nucleus may imprint themselves onto the gravitational field around the Earth – something that may influence the Moon etc. (We may subjectively feel that we have already perceived one particular answer, but a more perfect physicist has to evolve us into linear superpositions as well, in order to allow our wave function to interfere with itself and to negate the result of our perceptions. This more perfect and larger physicist will rightfully deny that in a precise calculation, it's possible to treat the wave function as a "collapsed one" at the moment right after we "feel an outcome".)
Because the radioactive nucleus may be found in a linear superposition of distinct states and because this state is imprinted onto the cat and the Earth, it's obvious that even the gravitational field around the (former?) Earth is generally found in a probabilistic linear superposition of different states. Consequently, the values of the metric tensor at various points have to be operators whose values may only be predicted probabilistically, much like the values of any observable in any quantum theory.

Let's now take the non-relativistic, weak-gravitational-field, low-energy limit of Einstein's equations written above. In this non-relativistic limit, \(\hat g_{00}\) is the only important component of the metric tensor (the gravitational redshift) and it gets translated to the gravitational potential \(\hat \Phi\), which is clearly an operator-valued field, too. We get\[ \Delta \hat\Phi(t,\vec x) = 4\pi G \hat\rho(t,\vec x). \] It looks like the Hossenfelder version of Poisson's equation except that the gravitational potential on the left hand side has a hat; and the source \(\hat\rho\), i.e. the mass density, has replaced her \(m \left|\Psi(t,\vec x)\right|^2\). Fine. There are some differences. But can I make special choices that will produce her equation out of the correct equation above?

What is the mass density operator \(\hat\rho\) equal to in the case of the electron? Well, it's easy to answer this question. The mass density coming from an electron blows up at the point where the electron is located; it's zero everywhere else. Clearly, the mass density is a three-dimensional delta-function:\[ \hat\rho(t,\vec x) = m\, \delta^{(3)}(\hat{\vec X} - \vec x). \] Just to be sure, the arguments of the field operators such as \(\hat\rho\) – the arguments that the fields depend on – are ordinary coordinates \(\vec x\) which have no hats because they're not operators. In quantum field theories, whether they're relativistic or not, they're as independent variables as the time \(t\); after all, \((t,x,y,z)\) are mixed with each other by the relativistic Lorentz transformations which are manifest symmetries in relativistic quantum field theories. However, the equation above says that the mass density at the point \(\vec x\) blows up iff the eigenvalue of the electron's position \(\hat{\vec X}\), an eigenvalue of an observable, is equal to this \(\vec x\). The equation above is an operator equation. And yes, it's possible to compute functions (including the delta-function) of operator-valued arguments. Semiclassical gravity isn't necessarily too self-consistent an approximation.

Clearly, the operator \(\delta^{(3)}(\hat{\vec X} - \vec x)\) is something different than Hossenfelder's \(\left|\Psi(t,\vec x)\right|^2\) – which isn't an operator at all – so her equation isn't right. Can we obtain the squared wave function in some way? Well, you could try to take the expectation value of the last displayed equation:\[ \bra\Psi \Delta \hat\Phi(t,\vec x)\ket\Psi = 4\pi G m \left|\Psi(t,\vec x)\right|^2. \] Indeed, if you compute the expectation value of the operator \(\delta^{(3)}(\hat{\vec X} - \vec x)\) in the state \(\ket\Psi\), you will obtain \(\left|\Psi(t,\vec x)\right|^2\). However, note that the equation above still differs from the Hossenfelder-Poisson equation: our correct equation properly sandwiches the gravitational potential, which is an operator-valued field, in between the two copies of the wave function.
Can't you just introduce a new symbol \(\Delta\Phi\), one without any hats, for the expectation value entering the left hand side of the last equation? You may, but it's just an expectation value, a number that depends on the state. The proper Schrödinger equation with the gravitational potential that we started with contains the operator \(\hat\Phi(t,\vec x)\) that is manifestly independent of the wave function (either because it is an external classical field – if we want to treat it as a deterministically evolving background field – or because it is a particular operator acting on the Hilbert space). So they're different things. At any rate, the original pair of equations is wrong.

Nonlinearity in the wave function is lethal

Those deluded people are obsessed with expectation values because they don't want to accept quantum mechanics. The expectation value of an operator "looks like" a classical quantity and classical quantities are the only physical quantities they have really accepted – and 19th century classical physics is the newest framework for physics that they have swallowed – so they try to deform and distort everything so that it resembles classical physics. An arbitrarily silly caricature of the reality is always preferred by them over the right equations as long as it looks more classical. But Nature obeys quantum mechanics. The observables we can see – all of them – are indeed linear operators acting on the Hilbert space. If something may be measured and seen to be equal to something or something else (this includes Yes/No questions we may answer by an experiment), then "something" is always associated with a linear operator on the Hilbert space (Yes/No questions are associated with Hermitian projection operators). If you are using a set of concepts that violate this universal postulate, then you contradict basic rules of quantum mechanics and what you say is just demonstrably wrong. This basic rule doesn't depend on any dynamical details of your would-be quantum theory and it admits no loopholes.

Two pieces of the wave function don't attract each other at all

You could say that one may talk about the expectation values in some contexts because they may give a fair approximation to quantum mechanics. The behavior of some systems may be close to the classical one, anyway, so why wouldn't we talk about the expectation values only? However, this approximation is only meaningful if the variations of the physical observables (encoded in the spread of the wave function) are much smaller than their characteristic values such as the (mean) distances between the particles which we want to treat as classical numbers, e.g.\[ |\Delta \vec x| \ll O(|\vec x_1-\vec x_2|). \] However, the very motivation that makes those confused people study the Schrödinger-Newton system of equations is that this condition isn't satisfied at all. What they typically want to achieve is to "collapse" the wave function packets. They're composed of several distant enough pieces, otherwise they wouldn't feel the need to collapse them. In their system of equations, two distant portions of the wave function attract each other in the same way as two celestial bodies do – because \(m |\Psi|^2\) enters as the classical mass density in Poisson's equation for the gravitational potential. They write many papers studying whether this self-attraction of "parts of the electron" or another object may be enough to "keep the wave function compact enough". Of course, it is not enough.
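One can also put a number on "not enough". A rough sketch in Python, with the wave-packet size of 1 mm being my illustrative assumption: compare the kinetic (Laplacian) term of the first Schrödinger-Newton equation with the gravitational self-term for a single electron.

# Kinetic vs gravitational self-energy scales in the Schrodinger-Newton
# equations, for an electron wave packet of assumed size sigma = 1 mm.
G, hbar, m_e = 6.674e-11, 1.055e-34, 9.109e-31
sigma = 1.0e-3                              # assumed packet size in meters

E_kin  = hbar**2 / (2.0 * m_e * sigma**2)   # scale of the Laplacian term
E_grav = G * m_e**2 / sigma                 # scale of the m*Phi self-term
print(E_grav / E_kin)                       # ~1e-35

So even taking the (wrong) equations at face value, the self-attraction they add to an electron's wave packet is roughly a \(10^{-35}\) correction; it could never keep anything compact.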
The gravitational force is extremely weak and cannot play such an essential role in the experiments with elementary particles. In Ghirardi-Rimini-Weber: collapsed pseudoscience, I have described somewhat more sophisticated "collapse theories" that are trying to achieve a similar outcome: to misinterpret the wave function as a "classical object" and to prevent it from spreading. Of course, these theories cannot work, either. To keep these wave functions compact enough, they have to introduce kicks that are so large that we are sure that they don't exist. You simply cannot find any classical model that agrees with observations in which the wave function is a classical object – simply because the wave function isn't a classical object, and this fact is really an experimentally proven one, as you know if you think a little bit.

But what the people studying the Schrödinger-Newton system of equations do is even much more stupid than what the GRW folks attempted. It is internally inconsistent already at the mathematical level. You don't have to think about some sophisticated experiments to verify whether these equations are viable. They can be safely ruled out by pure thought because they predict things that are manifestly wrong. I have already said that the Hossenfelder-Poisson equation for the gravitational potential treats the squared wave function as if it were a mass density. If your wave function is composed of two major pieces in two regions, they will behave as two clouds of interplanetary gas, and these two clouds will attract because each of them influences the gravitational potential that influences the motion of the other cloud, too. However, this attraction between two "pieces" of a wave function definitely doesn't exist, in sharp contrast with the immensely dumb opinion held by pretty much every "alternative" kibitzer about quantum mechanics, i.e. everyone who has ever offered any musings that something is fundamentally wrong with the proper Copenhagen quantum mechanics. There would only be an attraction if the matter (electron) existed at both places, because the attraction is proportional to \(M_1 M_2\). However, one may easily show that the counterpart of \(M_1M_2\) is zero: the matter is never at both places at the same time.

Imagine that the wave function has the form\[ \ket\psi = 0.6\,\ket\phi + 0.8i\,\ket\chi, \] where the states \(\ket\phi\) and \(\ket\chi\) are supported by very distant regions. As you know, this state vector implies that the particle has 36% odds to be in the "phi" region and 64% odds to be in the "chi" region. I chose probabilities that are nicely rational, exploiting the famous 3-4-5 Pythagorean triangle, but there's another reason why I didn't pick the odds to be 50% and 50%: there is absolutely nothing special about wave functions that predict exactly the same odds for two different outcomes. The number 50 is just a random number in between \(0\) and \(100\), and it only becomes special if there is an exact symmetry between \(p\) and \((1-p)\), which is usually not the case. Much of the self-delusion by the "many worlds" proponents is based on the misconception that predictions with equal odds for various outcomes are special or "canonical". They're not.

Fine. So if we have the wave function \(\ket\psi\) above, do the two parts of the wave function attract each other? The answer is a resounding No. The basic fact about quantum mechanics that all these Schrödinger-Newton and many-worlds and other pseudoscientists misunderstand is the following point.
The wave function above doesn't mean that there is 36% of an object here AND 64% of an object there. (WRONG.)

Note that there is "AND" in the sentence above, indicating the existence of two objects. Instead, the right interpretation is that

the particle is here (36% odds) OR there (64% odds). (RIGHT.)

The correct word is "OR", not "AND"! However, unlike in classical physics, you're not allowed to assume that one of the possibilities is "objectively true" in the classical sense even if the position isn't measured. On the other hand, even in quantum mechanics, it's still possible to strictly prove that the particle isn't found at both places simultaneously; the state vector is an eigenstate of the "both places" projection operator (the product of the two projection operators) with the eigenvalue zero. (The same comments apply to two slits in a double-slit experiment.) The mutually orthogonal terms contributing to the wave function or density matrix aren't multiple objects that simultaneously exist, as the word "AND" would indicate. You would need (tensor) products of Hilbert spaces and/or wave functions, not sums, to describe multiple objects! Instead, they are mutually excluding alternatives for what may exist, alternative properties that one physical system (e.g. one electron) may have. And mutually excluding alternatives simply cannot interact with each other, gravitationally or otherwise.

Imagine you throw dice. The result may be "1" or "2" or "3" or "4" or "5" or "6". But you know that only one answer is right. There can't be any interaction that would say that because both "1" and "6" may occur, they attract each other, which is why you probably get "3" or "4" in the middle. It's nonsense because "1" and "6" are never objects that simultaneously exist. If they don't simultaneously exist, they can't attract each other, whatever the rules are. They can't interact with one another at all! While the expectation value of the electron's position may be "somewhere in between" the regions "phi" and "chi", we may use the wave function to prove with absolute certainty that the electron isn't in between.

The proponents of the "many-worlds interpretation" often commit the same trivial mistake. They are imagining that two copies of you co-exist at the same moment – in some larger "multiverse". That's why they often talk about one copy thinking about how the other copy is feeling in another part of a multiverse. But the other copy can't be feeling anything at all because it doesn't exist if you do! You and your copy are mutually excluding. If you wanted to describe two people, you would need a larger Hilbert space (a tensor product of two copies of the space for one person), and if you produced two people out of one, the evolution of the wave function would be quadratic, i.e. nonlinear, which would conflict with quantum mechanics (and its no-xerox theorem), too. These many-worlds apologists, including Brian Greene, often like to say (see e.g. The Hidden Reality) that the proper Copenhagen interpretation doesn't allow us to treat macroscopic objects by the very same rules of quantum mechanics with which the microscopic objects are treated, and that's why they promote the many worlds. This proposition is what I call chutzpah. In reality, the claim that right after the measurement by one person, there suddenly exist several people is in striking contradiction with facts that may be easily extracted from quantum mechanics applied to a system of people.
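For the record, the "both places" statement above is a one-line computation; the only assumption beyond the text is that the two regions, and hence the two states, don't overlap, i.e. \(\langle\phi|\chi\rangle = 0\):\[ \hat P_\phi \hat P_\chi = \ket\phi\bra\phi\,\ket\chi\bra\chi = \langle\phi|\chi\rangle\,\ket\phi\bra\chi = 0, \] so \(\bra\psi \hat P_\phi \hat P_\chi \ket\psi = 0\) for every state \(\ket\psi\), not just for the particular \(0.6\,\ket\phi + 0.8i\,\ket\chi\) above: the probability of "the particle is in both regions" vanishes identically.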
The quantum mechanical laws – laws meticulously followed by the Copenhagen school, regardless of the size and context – still imply that the total mass is conserved, at least with a 1-kilogram precision, so it is simply impossible for one person to evolve into two. It's impossible because of the very same laws of quantum mechanics that, among many other things, protect Nature against the violation of charge conservation in nuclear processes. It's them, the many-worlds apologists, who are totally denying the validity of the laws of quantum mechanics for the macroscopic objects. In reality, quantum mechanics holds for all systems, and for macroscopic objects, one may prove that classical physics is often a valid approximation, as the founding fathers of quantum mechanics knew and explicitly said. The validity of this approximation, as they also knew, is also a necessary condition for us to be able to make any "strict valid statements" of the classical type. The condition is hugely violated by interfering quantum microscopic (but, in principle, also large) objects before they are measured, so one can't talk about the state of the system before the measurement in any classical language.

In Nature, all observables (as well as the S-matrix and other evolution operators) are expressed by linear operators acting on the Hilbert space, and Schrödinger's equation describing the evolution of any physical system has to be linear, too. Even if you use the density matrix, it evolves according to the "mixed Schrödinger equation" (the von Neumann equation), which is also linear:
\[ i\hbar \frac{{\rm d}}{{\rm d}t}\hat\rho(t) = [\hat H(t),\hat \rho(t)]. \]
It's extremely important that the density matrix \(\hat \rho\) enters linearly because \(\hat \rho\) is the quantum mechanical representation of the probability distribution, including the initial one. And the probabilities of final states are always linear combinations of the probabilities of the initial states. This claim follows from pure logic and will hold in any physical system, regardless of its laws. Why? Classically, the probabilities of final states \(P({\rm final}_j)\) are always given by
\[ P({\rm final}_j) = \sum_{i=1}^N P({\rm initial}_i)\, P({\rm evolution}_{i\to j}) \]
whose right hand side is linear in the probabilities of the initial states while the left hand side is linear in the probabilities of the final states. Regardless of the system, these dependences are simply linear. Quantum mechanics generalizes the probability distributions to the density matrices, which admit states arising from superpositions (by having off-diagonal elements) and which are compatible with the non-zero commutators between generic observables. However, whenever your knowledge about a system may be described classically, the equation above strictly holds. It is pure maths; it is as questionable or unquestionable (make your guess) as \(2+2=4\). There isn't any "alternative probability calculus" in which the final probabilities would depend on the initial probabilities nonlinearly. If you carefully study the possible consistent algorithms to calculate the probabilities of various final outcomes or observations, you will find out that the quantum mechanical evolution indeed has to be linear in the density matrix. The Hossenfelder-Poisson equation fails to obey this condition, so it violates the most basic rules of the probability calculus.
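Both the classical composition law above and the unitary evolution of \(\hat\rho\) can be checked to be linear in a few lines. A minimal sketch with made-up numbers (numpy and scipy assumed; the Hamiltonian is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

# --- Classical: final probabilities are a stochastic matrix acting on the initial ones ---
T = np.array([[0.7, 0.2, 0.0],     # T[j, i] = P(evolution i -> j); each column sums to 1
              [0.3, 0.5, 0.4],
              [0.0, 0.3, 0.6]])
p1 = np.array([1.0, 0.0, 0.0])     # one initial distribution
p2 = np.array([0.0, 0.5, 0.5])     # another one
lam = 0.25
assert np.allclose(T @ (lam * p1 + (1 - lam) * p2),
                   lam * (T @ p1) + (1 - lam) * (T @ p2))   # linear, by matrix algebra

# --- Quantum: the von Neumann evolution is equally linear in the density matrix ---
H = np.array([[1.0, 0.3],
              [0.3, -1.0]])        # an arbitrary Hermitian Hamiltonian (hbar = 1)
U = expm(-1j * H)                  # unitary evolution over a unit time interval

rho1 = np.diag([1.0, 0.0]).astype(complex)                  # pure state |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)    # pure state (|0>+|1>)/sqrt(2)
evolve = lambda r: U @ r @ U.conj().T
assert np.allclose(evolve(lam * rho1 + (1 - lam) * rho2),
                   lam * evolve(rho1) + (1 - lam) * evolve(rho2))
```

A hypothetical nonlinear rule – e.g. sourcing a potential by \(|\psi|^2\), as the Schrödinger-Newton equation does – would fail the second assertion: it would make the mutually exclusive alternatives in a probability distribution influence one another.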
Just to connect the density matrix discussion with a more widespread formalism, let us mention that quantum mechanics allows you to decompose any density matrix into a sum of terms arising from pure states,
\[ \hat\rho = \sum_{k=1}^M p_k \ket{\psi_k}\bra{\psi_k}, \]
and it may study the individual terms, pure states, independently of others. When we do so, and we often do, we find out that the evolution of \(\ket\psi\), the pure states, has to be linear as well. The linear maps \(\ket\psi\to \hat U\ket\psi\) produce \(\hat\rho\to \hat U\hat\rho \hat U^\dagger\) for \(\hat\rho=\ket\psi\bra\psi\), which is still linear in the density matrix, as required. If you had a more general, nonlinear evolution – or if you represented observables by non-linear operators etc. – then these nonlinear rules for the wave function would get translated to nonlinear rules for the density matrix as well. And nonlinear rules for the density matrix would contradict some completely basic "linear" rules for probabilities that are completely independent of any properties of the laws of physics, such as
\[ P({\rm final}_j) = \sum_{i} P({\rm initial}_i)\, P({\rm evolution}_{i\to j}). \]
So the linearity of the evolution equations in the density matrix (and, consequently, also the linearity in the state vector, which is a semifinished product from which the density matrix is built) is totally necessary for the internal consistency of a theory that predicts probabilities, whatever the internal rules that yield these probabilistic predictions are!

That's why two pieces of the wave function (or the density matrix) can never attract each other or otherwise interact with each other. As long as they're orthogonal, they're mutually exclusive possibilities of what may happen. They can never be interpreted as objects that simultaneously exist at the same moment. The product of their probabilities (and anything that depends on its being nontrivial) is zero because at least one of them equals zero. And the wave functions and density matrices cannot be interpreted as classical objects because it's been proven, by the most rudimentary experiments, that these objects are probability distributions, or semifinished products from which probabilities are computed, rather than observables. These statements depend on no open questions at the cutting edge of the modern physics research; they're parts of the elementary undergraduate material that has been understood by active physicists since the mid 1920s. It now trivially follows that all the people who study Schrödinger-Newton equations are profoundly deluded, moronic crackpots. And that's the memo.

Single mom: totally off-topic

Totally off-topic. I had to click somewhere, not sure where (correction: e-mail tip from Tudor C.), and I was led to this "news article"; click to zoom in. Single mom Amy Livingston of Plzeň, 87, is making $14,000 a month. That's not bad. First of all, not every girl manages to become a mom at the age of 87. Second of all, it is impressive for a mom with such a name – who probably doesn't speak Czech at all – to survive in my hometown at all. Her having 12 times the average salary makes her achievements even more impressive. ;-)

1. Lubos, Your points are well taken: the Schrodinger-Newton equation is fundamentally flawed. Expanding on these issues, I'd like to know your views on the validity of: 1) the WKB approximation, 2) semiclassical gravity, 3) quantum chaos and quantization of classically chaotic dynamical systems?

2. Dear Ervin, thanks for listening. All the entries in your list are obviously legitimate and interesting approximations (1, 2) or topics that may be studied (3).
That doesn't mean that all people say correct things about them and use them properly, of course. ;-) The WKB approximation is just the "leading correction coming from quantum mechanics" to classical physics. Various simplified Ansätze may be written down in various contexts. Semiclassical gravity either refers to general relativity with the first (one-loop) quantum corrections; or it represents the co-existence of quantized matter fields with non-quantized gravitational fields. This is only legitimate if the gravitational fields aren't affected by the matter fields, i.e. if the spacetime geometry solves the classical Einstein equations with sources that don't depend on the microscopic details of the matter fields and particles which are studied in the quantum framework. The matter fields propagate on a fixed classical background in this approximation, but they don't affect the background by their detailed microstates. Indeed, if the dependence of the gravitational fields on the properties of the matter fields is substantial or important, there's no way to use the semiclassical approximation. Some people would evolve the gravitational fields according to the expectation values of the stress-energy tensor, but that's the same mistake as discussed in this article in the context of the Hossenfelder-Poisson equation. Classical systems may be chaotic, i.e. show unpredictable behavior that is very sensitive to initial conditions. Quantum chaos is the research of the complicated wave functions etc. in systems whose (hatted, i.e. quantized) observables are analogous to those of classically chaotic systems.

3. Thanks Lubos. I also take classical approximations with a grain of salt. For instance, mixing classical gravity with quantum behavior is almost always questionable one way or another. Here is a follow-up question. What would you say if experiments on carefully prepared quantum systems could be carried out in highly accelerated frames of reference? Could this be a reliable way of falsifying predictions of semiclassical gravity, for example?
Baylor University Department of Physics
Physics News

Top News
•  Quantum computer makes first high-energy physics simulation
•  Scientists have detected gravitational waves for the second time
•  Surprise! The Universe Is Expanding Faster Than Scientists Thought
•  Building Blocks of Life Found in Comet's Atmosphere
•  Quantum cats here and there
•  Silicon quantum computers take shape in Australia
•  New Support for Alternative Quantum View
•  Dark matter does not include certain axion-like particles
•  Scientists discover new form of light
•  How light is detected affects the atom that emits it
•  AI learns and recreates Nobel-winning physics experiment
•  Scientists Talk Privately About Creating a Synthetic Human Genome
•  Boiling Water May Be Cause of Martian Streaks
•  Physicists Abuzz About Possible New Particle as CERN Revs Up
•  Gravitational lens reveals hiding dwarf dark galaxy
•  Leonardo Da Vinci's Living Relatives Found
•  Stephen Hawking: We Probably Won't Find Aliens Anytime Soon
•  Measurement of Universe's expansion rate creates cosmological puzzle
•  'Bizarre' Group of Distant Black Holes are Mysteriously Aligned
•  Isaac Newton: handwritten recipe reveals fascination with alchemy
•  Surprise! Gigantic Black Hole Found in Cosmic Backwater
•  New Bizarre State of Matter Seems to Split Fundamental Particles
•  Researchers made the smallest diode using a DNA molecule
•  Astronomers Discover Colossal 'Super Spiral' Galaxies
•  How a distant planet could have killed the dinosaurs
•  Search for alien signals expands to 20,000 star systems
•  New Tetraquark Particle Sparks Doubts
•  7 Theories on the Origin of Life
•  NIST Creates Fundamentally Accurate Quantum Thermometer
•  DNA data storage could last thousands of years
•  Hints of new LHC particle get slightly stronger
•  Snake walk: The physics of slithering
•  ET search: Look for the aliens looking for Earth
•  Physicists create first photonic Maxwell's demon
•  Black holes banish matter into cosmic voids
•  How NASA's new telescope could unlock some mysteries of the universe
•  5D Black Holes Could Break Relativity
•  Reactor data hint at existence of fourth neutrino
•  LIGO Discovers the Merger of Two Black Holes
•  Einstein's gravitational waves 'seen' from black holes
•  Earth-like Planets Have Earth-like Interiors
•  Physicists find signs of four-neutron nucleus
•  Have Gravitational Waves Finally Been Spotted?
•  The telescope gets its first major upgrade in centuries
•  Stephen Hawking: Black Holes Have 'Hair'
•  The Riemann Hypothesis has finally been SOLVED by a Nigerian professor
•  Scientists struggle to stay grounded after possible gravitational wave signal
•  NASA's Kepler Comes Roaring Back with 100 New Exoplanet Finds
•  Physicists figure out how to retrieve information from a black hole
•  New study asks: Why didn't the universe collapse?
•  Physicists in Europe Find Tantalizing Hints of a Mysterious New Particle
•  German physicists see landmark in nuclear fusion quest
•  Controversial experiment sees no evidence that the universe is a hologram
•  LISA Pathfinder Heads to Space
•  Scientists Create New Kind Of Diamond At Room Temperature
•  Japanese scientists create touchable holograms
•  Positrons Are Plentiful In Ultra-Intense Laser Blasts
•  Scientists Link Moon’s Tilt and Earth’s Gold
•  A Century Ago, Einstein’s Theory of Relativity Changed Everything
•  Is Earth Growing a Hairy Dark Matter 'Beard'?
•  Scientists caught a new planet forming for the first time ever
•  Experiment records extreme quantum weirdness
•  Scientists look into hydrogen atom, find old recipe for pi
•  Strong forces make antimatter stick
•  Birth of universe modeled in massive data simulation
•  Modern Mystery: Ancient Comet Is Spewing Oxygen
•  Ingredients for Life Were Always Present on Earth, Comet Suggests
•  Life May Have Begun 4.1 Billion Years Ago on an Infant Earth
•  Earth Bloomed Early: A Fermi Paradox Solution?
•  Perfectly accurate clocks turn out to be impossible
•  Our Universe: It's the 'Simplest' Thing We Know
•  Baylor Physicist Appointed to Management Team of Major Scientific Experiment at CERN
•  They're Out There! Most People Believe in E.T.

Quantum computer makes first high-energy physics simulation

Coaxing qubits
Four qubits constitute a rudimentary quantum computer; the fabled applications of future quantum computers, such as breaking down huge numbers into prime factors, will require hundreds of qubits and complex error-correction codes. But for physical simulations, which can tolerate small margins of error, 30 to 40 qubits could already be useful, Martinez says. John Chiaverini, a physicist who works on quantum computing at the Massachusetts Institute of Technology in Cambridge, says that the experiment might be difficult to scale up without significant modifications. The linear arrangement of ions in the trap, he says, is “particularly limiting for attacking problems of a reasonable scale”. Muschik says that her team is already making plans to use two-dimensional configurations of ions.

Are we there yet?
“We are not yet there where we can answer questions we can’t answer with classical computers,” Martinez says, “but this is a first step in that direction.” Quantum computers are not strictly necessary for understanding the electromagnetic force. However, the researchers hope to scale up their techniques so that they can simulate the strong nuclear force. This may take years, Muschik says, and will require not only breakthroughs in hardware but also the development of new quantum algorithms. These scaled-up quantum computers could help in understanding what happens during the high-speed collision of two atomic nuclei, for instance. Faced with such a problem, classical computer simulations just fall apart, says Andreas Kronfeld, a theoretical physicist who works on simulations of the strong nuclear force at the Fermi National Accelerator Laboratory (Fermilab) near Chicago, Illinois. Another example, he says, is understanding neutron stars. Researchers think that these compact celestial objects consist of densely packed neutrons, but they’re not sure. They also don’t know the state of matter in which those neutrons would exist.
E.T. Phones Earth? 1,500 Years Until Contact, Experts Estimate
"Communicating with anybody is an incredibly slow, long-duration endeavor," said Evan Solomonides at a press conference June 14 at the American Astronomical Society's summer meeting in San Diego, California. Solomonides is an undergraduate student at Cornell University in New York, where he worked with Cornell radio astronomer Yervant Terzian to explore the mystery of the Fermi paradox: If life is abundant in the universe, the argument goes, it should have contacted Earth, yet there's no definitive sign of such an interaction.

Scientists have detected gravitational waves for the second time
Scientists with the LIGO collaboration claim they have once again detected gravitational waves — the ripples in space-time produced by objects moving throughout the Universe. It’s the second time these researchers have picked up gravitational wave signals, after becoming the first team in history to do so earlier this year.

Stephen Hawking: Black Holes Have 'Hair'
“A black hole has no hair.” That mysterious, koan-like statement by the theorist and legendary phrasemaker John Archibald Wheeler of Princeton has stood for half a century as one of the brute pillars of modern physics. It describes the ability of nature, according to classical gravitational equations, to obliterate most of the attributes and properties of anything that falls into a black hole, playing havoc with science’s ability to predict the future and tearing at our understanding of how the universe works. Now it seems that statement might be wrong. Recently Stephen Hawking, who has spent his entire career battling a form of Lou Gehrig’s disease, wheeled across the stage in Harvard’s hoary, wood-paneled Sanders Theater to do battle with the black hole. It is one of the most fearsome demons ever conjured by science, and one partly of his own making: a cosmic pit so deep and dense and endless that it was long thought that nothing — not even light, not even a thought — could ever escape. But Dr. Hawking was there to tell us not to be so afraid. In a paper to be published this week in Physical Review Letters, Dr. Hawking and his colleagues Andrew Strominger of Harvard and Malcolm Perry of Cambridge University in England say they have found a clue pointing the way out of black holes. Black holes are the most ominous prediction of Einstein’s general theory of relativity: Too much matter or energy concentrated in one place would cause space to give way, swallowing everything inside like a magician’s cloak. An eternal prison was the only metaphor scientists had for these monsters until 40 years ago, when Dr. Hawking turned black holes upside down — or perhaps inside out. His equations showed that black holes would not last forever. Over time, they would “leak” and then explode in a fountain of radiation and particles. Ever since, the burning question in physics has been: When the black hole finally goes, does it give up the secrets of everything that fell in? Dr. Hawking’s calculation was, and remains, hailed as a breakthrough in understanding the connection between gravity and quantum mechanics, between the fabric of space and the subatomic particles that live inside it — the large and the small in the universe. But there was a hitch. By Dr. Hawking’s estimation, the radiation coming out of the black hole as it fell apart would be random. As a result, most of the “information” about what had fallen in — all of the attributes and properties of the things sucked in, whether elephants or donkeys, Volkswagens or Cadillacs — would be erased.
In a riposte to Einstein’s famous remark that God does not play dice, Dr. Hawking said in 1976, “God not only plays dice with the universe, but sometimes throws them where they can’t be seen.” But his calculation violated a tenet of modern physics: that it is always possible in theory to reverse time, run the proverbial film backward and reconstruct what happened in, say, the collision of two cars or the collapse of a dead star into a black hole. The universe, like a kind of supercomputer, is supposed to be able to keep track of whether one car was a green pickup truck and the other was a red Porsche, or whether one was made of matter and the other antimatter. These things may be destroyed, but their “information” — their essential physical attributes — should live forever. In fact, the information seemed to be lost in the black hole, according to Dr. Hawking, as if part of the universe’s memory chip had been erased. According to this theorem, only information about the mass, charge and angular momentum of what went in would survive. Nothing about whether it was antimatter or matter, male or female, sweet or sour. A war of words and ideas ensued. The information paradox, as it is known, was no abstruse debate, as Dr. Hawking pointed out from the stage of the Sanders Theater in April. Rather, it challenged foundational beliefs about what reality is and how it works. If the rules break down in black holes, they may be lost in other places as well, he warned. If foundational information disappears into a gaping maw, the notion of a “past” itself may be in jeopardy — we couldn’t even be sure of our own histories. Our memories could be illusions. “It’s the past that tells us who we are. Without it we lose our identity,” he said. Fortunately for historians, Dr. Hawking conceded defeat in the black hole information debate 10 years ago, admitting that advances in string theory, the so-called theory of everything, had left no room in the universe for information loss. At least in principle, then, he agreed, information is always preserved — even in the smoke and ashes when you, say, burn a book. With the right calculations, you should be able to reconstruct the patterns of ink, the text. Dr. Hawking paid off a bet with John Preskill, a Caltech physicist, with a baseball encyclopedia, from which information can be easily retrieved. But neither Dr. Hawking nor anybody else was able to come up with a convincing explanation for how that happens and how all this “information” escapes from the deadly erasing clutches of a black hole. Indeed, a group of physicists four years ago tried to figure it out and suggested controversially that there might be a firewall of energy just inside a black hole that stops anything from getting out or even into a black hole. The new results do not address that issue. But they do undermine the famous notion that black holes have “no hair” — that they are shorn of the essential properties of the things they have consumed. About four years ago, Dr. Strominger started noodling around with theoretical studies about gravity dating to the early 1960s. Interpreted in a modern light, the papers — published in 1962 by Hermann Bondi, M. G. J. van der Burg, A. W. K. Metzner and Rainer Sachs, and in 1965 by Steven Weinberg, later a recipient of the Nobel Prize — suggested that gravity was not as ruthless as Dr. Wheeler had said. Looked at from the right vantage point, black holes might not be bald at all.
The right vantage point is not from a great distance in space — the normal assumption in theoretical calculations — but from a far distance in time, the far future, technically known as “null infinity.” “Null infinity is where light rays go if they are not trapped in a black hole,” Dr. Strominger tried to explain over coffee in Harvard Square recently. From this point of view, you can think of light rays on the surface of a black hole as a bundle of straws all pointing outward, trying to fly away at the speed of, of course, light. Because of the black hole’s immense gravity, they are stuck. But the individual straws can slide inward or outward along their futile tracks, slightly advancing or falling back, under the influence of incoming material. When a particle falls into a black hole, it slides the straws of light back and forth, a process called a supertranslation. That leaves a telltale pattern on the horizon, the invisible boundary that is the point of no return of a black hole — a halo of “soft hair,” as Dr. Strominger and his colleagues put it. That pattern, like the pixels on your iPhone or the wavy grooves in a vinyl record, contains information about what has passed through the horizon and disappeared. “One often hears that black holes have no hair,” Dr. Strominger and a postdoctoral researcher, Alexander Zhiboedov, wrote in a 2014 paper. Not true: “Black holes have a lush infinite head of supertranslation hair.” Enter Dr. Hawking. For years, he and Dr. Strominger and a few others had gotten together to work in seclusion at a Texas ranch owned by the oilman and fracking pioneer George P. Mitchell. Because Dr. Hawking was discouraged from flying, in April 2014 the retreat was in Hereford, Britain. It was there that Dr. Hawking first heard about soft hair — and was very excited. He, Dr. Strominger and Dr. Perry began working together. In Stockholm that fall, he made a splash when he announced that a resolution to the information paradox was at hand — somewhat to the surprise of Dr. Strominger and Dr. Perry, who had been trying to maintain an understated stance. Although information gets hopelessly scrambled, Dr. Hawking declared, it “can be recovered in principle, but it is lost for all practical purposes.” In January, Dr. Hawking, Dr. Strominger and Dr. Perry posted a paper online titled “Soft Hair on Black Holes,” laying out the basic principles of their idea. In the paper, they are at pains to admit that knocking the pins out from under the no-hair theorem is a far cry from solving the information paradox. But it is progress. Their work suggests that science has been missing something fundamental about how black holes evaporate, Dr. Strominger said. And now they can sharpen their questions. “I hope we have the tiger by the tail,” he said. Whether or not soft hair is enough to resolve the information paradox, nobody really knows. Reaction from other physicists has been reserved.

Surprise! The Universe Is Expanding Faster Than Scientists Thought
"This surprising finding may be an important clue to understanding those mysterious parts of the universe that make up 95 percent of everything and don't emit light, such as dark energy, dark matter and dark radiation," study leader Adam Riess, an astrophysicist at the Space Telescope Science Institute and Johns Hopkins University in Baltimore, said in a statement.
Riess — who shared the 2011 Nobel Prize in physics for the discovery that the universe's expansion is accelerating — and his colleagues used NASA's Hubble Space Telescope to study 2,400 Cepheid stars and 300 Type Ia supernovas.

Building Blocks of Life Found in Comet's Atmosphere
For the first time, scientists have directly detected a crucial amino acid and a rich selection of organic molecules in the dusty atmosphere of a comet, further bolstering the hypothesis that these icy objects delivered some of life's ingredients to Earth. The amino acid glycine, along with some of its precursor organic molecules and the essential element phosphorus, were spotted in the cloud of gas and dust surrounding Comet 67P/Churyumov-Gerasimenko by the Rosetta spacecraft, which has been orbiting the comet since 2014. While glycine had previously been extracted from cometary dust samples that were brought to Earth by NASA's Stardust mission, this is the first time that the compound has been detected in space, naturally vaporized. The discovery of those building blocks around a comet supports the idea that comets could have played an essential role in the development of life on early Earth, researchers said.

Quantum cats here and there
The story of Schrödinger's cat being hidden away in a box and being both dead and alive is often invoked to illustrate how peculiar the quantum world can be. In a twist on the dead/alive behavior, Wang et al. now show that the cat can be in two separate locations at the same time. Constructing their cat from coherent microwave photons, they show that the state of the “electromagnetic cat” can be shared by two separated cavities. Going beyond common-sense absurdities of the classical world, the ability to share quantum states in different locations could be a powerful resource for quantum information processing.

Planet 1,200 Light-years Away Is A Good Prospect For Habitability
A distant planet known as Kepler-62f could be habitable, a team of astronomers reports. The planet, which is about 1,200 light-years from Earth in the direction of the constellation Lyra, is approximately 40 percent larger than Earth. At that size, Kepler-62f is within the range of planets that are likely to be rocky and possibly could have oceans, said Aomawa Shields, the study's lead author and a National Science Foundation astronomy and astrophysics postdoctoral fellow in UCLA's department of physics and astronomy. NASA's Kepler mission discovered the planetary system that includes Kepler-62f in 2013, and it identified Kepler-62f as the outermost of five planets orbiting a star that is smaller and cooler than the sun. But the mission didn't produce information about Kepler-62f's composition or atmosphere or the shape of its orbit. Shields collaborated on the study with astronomers Rory Barnes, Eric Agol, Benjamin Charnay, Cecilia Bitz and Victoria Meadows, all of the University of Washington, where Shields earned her doctorate. To determine whether the planet could sustain life, the team came up with possible scenarios about what its atmosphere might be like and what the shape of its orbit might be. "We found there are multiple atmospheric compositions that allow it to be warm enough to have surface liquid water," said Shields, a University of California President's Postdoctoral Program Fellow. "This makes it a strong candidate for a habitable planet."

Has a Hungarian Physics Lab Found a Fifth Force of Nature?
A laboratory experiment in Hungary has spotted an anomaly in radioactive decay that could be the signature of a previously unknown fifth fundamental force of nature, physicists say – if the finding holds up. Attila Krasznahorkay at the Hungarian Academy of Sciences’s Institute for Nuclear Research in Debrecen, Hungary, and his colleagues reported their surprising result in 2015 on the arXiv preprint server, and this January in the journal Physical Review Letters. But the report – which posited the existence of a new, light boson only 34 times heavier than the electron – was largely overlooked. Then, on April 25, a group of US theoretical physicists brought the finding to wider attention by publishing its own analysis of the result on arXiv. The theorists showed that the data didn’t conflict with any previous experiments – and concluded that it could be evidence for a fifth fundamental force. “We brought it out from relative obscurity,” says Jonathan Feng, at the University of California, Irvine, the lead author of the arXiv report.

Silicon quantum computers take shape in Australia
Silicon is at the heart of the multibillion-dollar computing industry. Now, efforts to harness the element to build a quantum processor are taking off, thanks to elegant designs from an Australian collaboration. In July, the Centre for Quantum Computation and Communication Technology, which is based at the University of New South Wales (UNSW) in Sydney, will receive the first instalment of an Aus$46-million (US$33-million) investment. The money comes from government and industry sources whose goal is to create a practical quantum computer. At an innovation forum in London on 6 May, hosted by Nature and start-up accelerator Entrepreneur First, two physicists from a group at the UNSW pitched a plan to reach that goal. Their audience was a panel of entrepreneurs and scientists, who critiqued ideas for commercializing a range of quantum technologies, including sensors, computer security and a quantum internet as well as quantum computers. So far, the UNSW team has demonstrated quantum bits, or qubits, only in a single atom. Useful computations will require linking qubits in multiple atoms. But the team’s silicon qubits hold their quantum state nearly a million times longer than do systems made from superconducting circuits, a leading alternative, UNSW physicist Guilherme Tosi told participants at the event. This helps the silicon qubits to perform operations with one-sixth of the errors of superconducting circuits. If the team can pull off this low error rate in a larger system, it would be “quite amazing”, said Hartmut Neven, director of engineering at Google and a member of the panel. But he cautioned that in terms of performance, the system is far behind others. The team is aiming for ten qubits in five years, but both Google and IBM are already approaching this with superconducting systems. And in five years, Google plans to have ramped up to hundreds of qubits.

New Support for Alternative Quantum View
Of the many counterintuitive features of quantum mechanics, perhaps the most challenging to our notions of common sense is that particles do not have locations until they are observed. This is exactly what the standard view of quantum mechanics, often called the Copenhagen interpretation, asks us to believe. Instead of the clear-cut positions and movements of Newtonian physics, we have a cloud of probabilities described by a mathematical structure known as a wave function.
The wave function, meanwhile, evolves over time, its evolution governed by precise rules codified in something called the Schrödinger equation. The mathematics are clear enough; the actual whereabouts of particles, less so. Until a particle is observed, an act that causes the wave function to “collapse,” we can say nothing about its location. Albert Einstein, among others, objected to this idea. As his biographer Abraham Pais wrote: “We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it.” But there’s another view — one that’s been around for almost a century — in which particles really do have precise positions at all times. This alternative view, known as pilot-wave theory or Bohmian mechanics, never became as popular as the Copenhagen view, in part because Bohmian mechanics implies that the world must be strange in other ways. In particular, a 1992 study claimed to crystallize certain bizarre consequences of Bohmian mechanics and in doing so deal it a fatal conceptual blow. The authors of that paper concluded that a particle following the laws of Bohmian mechanics would end up taking a trajectory that was so unphysical — even by the warped standards of quantum theory — that they described it as “surreal.” Nearly a quarter-century later, a group of scientists has carried out an experiment in a Toronto laboratory that aims to test this idea. And if their results, first reported earlier this year, hold up to scrutiny, the Bohmian view of quantum mechanics — less fuzzy but in some ways more strange than the traditional view — may be poised for a comeback.

Dark matter does not include certain axion-like particles
Scientists believe dark matter makes up about 80 percent of the matter in the universe. What exactly constitutes dark matter? Scientists still aren't sure. A new study, published this week in the journal Physical Review Letters, adds to the list of particles ruled out as dark matter candidates. Astronomers have previously hypothesized that axion-like particles, or ALPs, might make up dark matter. Given their diminutive mass – a billionth that of a single electron – it was a logical guess. But when researchers at Stockholm University used NASA's gamma-ray telescope on the Fermi satellite to look for ALPs in the Perseus galaxy cluster, they came up empty-handed. ALPs can briefly transform into photons when they travel through intense electromagnetic fields. Likewise, photons – including gamma rays – can briefly transform into ALPs. No such transformations, however, were detected near the center of the Perseus cluster. While the research didn't offer any revelations on the makeup of dark matter, scientists believe they can now exclude certain types of ALPs in the ongoing search for the elusive matter. "The ALPs we have been able to exclude could explain a certain amount of dark matter," Manuel Meyer, a physicist at Stockholm University, said in a news release. "What is particularly interesting is that with our analysis we are reaching a sensitivity that we thought could only be obtained with dedicated future experiments on Earth." So, the hunt for dark matter details continues.

Scientists discover new form of light
"For a beam of light, although traveling in a straight line it can also be rotating around its own axis," John Donegan, a professor at Trinity College Dublin's School of Physics, explained in a news release.
"So when light from the mirror hits your eye in the morning, every photon twists your eye a little, one way or another." "Our discovery will have real impacts for the study of light waves in areas such as secure optical communications," Donegan added. Researchers made their discovery after passing light through special crystals to create a light beam with a hollow, screw-like structure. Using quantum mechanics, the physicists theorized that the beam's twisting photons were being slowed to a half-integer of Planck's constant. The team of researchers then designed a device to measure the beam's angular momentum as it passed through the crystal. As they had predicted, they registered a shift in the flow of photons caused by quantum effects. The researchers described their discovery in a paper published this week in the journal Science Advances. "What I think is so exciting about this result is that even this fundamental property of light, that physicists have always thought was fixed, can be changed," concluded Paul Eastham, assistant professor of physics at Trinity. How light is detected affects the atom that emits it Flick a switch on a dark winter day and your office is flooded with bright light, one of many everyday miracles to which we are all usually oblivious. A physicist would probably describe what is happening in terms of the particle nature of light. An atom or molecule in the fluorescent tube that is in an excited state spontaneously decays to a lower energy state, releasing a particle called a photon. When the photon enters your eye, something similar happens but in reverse. The photon is absorbed by a molecule in the retina and its energy kicks that molecule into an excited state. Light is both a particle and a wave, and this duality is fundamental to the physics that rule the Lilliputian world of atoms and molecules. Yet it would seem that in this case the wave nature of light can be safely ignored. Kater Murch, assistant professor of physics in Arts and Sciences at Washington University in St. Louis, might give you an argument about that. His lab is one of the first in the world to look at spontaneous emission with an instrument sensitive to the wave rather than the particle nature of light, work described in the May 20th issue of Nature Communications. His experimental instrument consists of an artificial atom (actually a superconducting circuit with two states, or energy levels) and an interferometer, in which the electromagnetic wave of the emitted light interferes with a reference wave of the same frequency. This manner of detection turns everything upside down, he said. All that a photon detector can tell you about spontaneous emission is whether an atom is in its excited state or its ground state. But the interferometer catches the atom diffusing through a quantum "state space" made up of all the possible combinations, or superpositions, of its two energy states. This is actually trickier than it sounds because the scientists are tracking a very faint signal (the electromagnetic field associated with one photon), and most of what they see in the interference pattern is quantum noise. But the noise carries complementary information about the state of the artificial atom that allows them to chart its evolution. When viewed in this way, the artificial atom can move from a lower energy state to a higher energy one even as its follows the inevitable downward trajectory to the ground state. "You'd never see that if you were detecting photons," Murch said. 
So different detectors see spontaneous emission very differently. "By looking at the wave nature of light, we are able to see this lovely diffusive evolution between the states," Murch said. But it gets stranger. The fact that an atom's average excitation can increase even when it decays is a sign that how we look at light might give us some control over the atoms that emitted the light, Murch said. This might sound like a reversal of cause and effect, with the effect pushing on the cause. It is possible only because of one of the weirdest of all the quantum effects: When an atom emits light, quantum physics requires the light and the atom to become connected, or entangled, so that measuring a property of one instantly reveals the value of that property for the other, no matter how far away it is. Or put another way, every measurement of an entangled object perturbs its entangled partner. It is this quantum back-action, Murch said, that could potentially allow a light detector to control the light emitter. "Quantum control has been a dream for many years," Murch said. "One day, we may use it to enhance fluorescence imaging by detecting the light in a way that creates superpositions in the emitters. That's very long term, but that's the idea," he said.

AI learns and recreates Nobel-winning physics experiment

Scientists Talk Privately About Creating a Synthetic Human Genome
Scientists are now contemplating the fabrication of a human genome, meaning they would use chemicals to manufacture all the DNA contained in human chromosomes. The prospect is spurring both intrigue and concern in the life sciences community because it might be possible, such as through cloning, to use a synthetic genome to create human beings without biological parents. While the project is still in the idea phase, and also involves efforts to improve DNA synthesis in general, it was discussed at a closed-door meeting on Tuesday at Harvard Medical School in Boston. The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting. Organizers said the project could have a big scientific payoff and would be a follow-up to the original Human Genome Project, which was aimed at reading the sequence of the three billion chemical letters in the DNA blueprint of human life. The new project, by contrast, would involve not reading, but rather writing the human genome — synthesizing all three billion units from chemicals. But such an attempt would raise numerous ethical issues. Could scientists create humans with certain kinds of traits, perhaps people born and bred to be soldiers? Or might it be possible to make copies of specific people? “Would it be O.K., for example, to sequence and then synthesize Einstein’s genome?” Drew Endy, a bioengineer at Stanford, and Laurie Zoloth, a bioethicist at Northwestern University, wrote in an essay criticizing the proposed project. “If so how many Einstein genomes should be made and installed in cells, and who would get to make them?” Dr. Endy, though invited, said he deliberately did not attend the meeting at Harvard because it was not being opened to enough people and was not giving enough thought to the ethical implications of the work.
George Church, a professor of genetics at Harvard Medical School and an organizer of the proposed project, said there had been a misunderstanding. The project was not aimed at creating people, just cells, and would not be restricted to human genomes, he said. Rather it would aim to improve the ability to synthesize DNA in general, which could be applied to various animals, plants and microbes. “They’re painting a picture which I don’t think represents the project,” Dr. Church said in an interview. He said the meeting was closed to the news media, and people were asked not to tweet, because the project organizers, in an attempt to be transparent, had submitted a paper to a scientific journal. They were therefore not supposed to discuss the idea publicly before publication. He and other organizers said ethical aspects have been amply discussed since the beginning.

Boiling Water May Be Cause of Martian Streaks
The results of Earth-bound lab experiments appear to back up the theory that dark lines on Martian slopes are created by water — though in an otherworldly manner, scientists said Monday. A team from France, Britain and the United States constructed models and simulated Mars conditions to follow up on a 2015 study which proffered “the strongest evidence yet” for liquid water — a prerequisite for life — on the Red Planet. That finding had left many scientists scratching their heads, as the low pressure of Mars’ atmosphere means that water does not survive long in liquid form. It either boils or freezes.

An international team of astronomers has discovered three Earth-like exoplanets orbiting an ultra-cool dwarf star — the smallest and dimmest kind of star in the Galaxy — now known as TRAPPIST-1. The discovery, made with the TRAPPIST telescope at ESO's La Silla Observatory, is significant not only because the three planets have similar properties to Earth, suggesting they could harbor life, but also because they are relatively close (just 40 light years away) and they are the first planets ever discovered orbiting such a dim star. A research paper detailing the team's findings was published today in the journal Nature. "What is super exciting is that for the first time, we have extrasolar worlds similar in size and temperature to Earth — planets that could thus, in theory, harbor liquid water and host life on at least a part of their surfaces — for which the atmospheric composition can be studied in detail with current technology," lead researcher Michaël Gillon of the University of Liège in Belgium said in an email to Popular Mechanics.

The real reasons nothing can go faster than the speed of light
We are told that nothing can travel faster than light. This is how we know it is true.

Physicists Abuzz About Possible New Particle as CERN Revs Up
Scientists around the globe are revved up with excitement as the world's biggest atom smasher — best known for revealing the Higgs boson four years ago — starts whirring again to churn out data that may confirm cautious hints of an entirely new particle.

One of Stephen Hawking's most brilliant and disturbing theories may have been confirmed by a scientist who created a sound “black hole” in his laboratory, potentially paving the way for a Nobel Prize. Research by Professor Hawking, a cosmologist at Cambridge University, disputes the notion that black holes are a gravitational sinkhole, pulling in matter and never allowing anything to escape, even light.
His model, developed in the 1970s, instead suggested that black holes could actually emit tiny particles, allowing energy to escape. If true, it would mean some black holes could simply evaporate completely, with profound implications for our understanding of the universe. But such is the weakness of the emitted particles, combined with the remoteness of even the nearest black holes, that his mathematical discovery has yet to be verified by observation. Instead, Jeff Steinhauer, professor of physics at the Technion university in Haifa, created something analogous to a “black hole” for sound in his laboratory. In a paper published on the physics website arXiv, and reported by The Times, he described how he cooled helium to close to absolute zero before manipulating it in such a way that sound could not cross it, like a black hole's event horizon. He said he found evidence that phonons – the sound equivalent of light's photons – were leaking out, rather as Prof Hawking had predicted for black holes. The results have yet to be replicated elsewhere, and scientists say they will want to check the effect is not caused by another factor. If confirmed, it would strengthen Prof Hawking's case for science's greatest prize. Although his theory has a lot of support, Nobel Prizes for Physics are not awarded without experimental proof. Earlier this year, Prof Hawking used the BBC's Reith Lecture to make the case that his work was close to being proven, both in the laboratory and from echoes of the very earliest moments of our universe. “I am resigned to the fact that I won’t see proof of Hawking radiation directly. There are solid state analogues of black holes and other effects that the Nobel committee might accept as proof,” he said. “But there’s another kind of Hawking radiation, coming from the cosmological event horizon of the early inflationary universe. I am now studying whether one might detect Hawking radiation in primordial gravitational waves... so I might get a Nobel prize after all.”

Gravitational lens reveals hiding dwarf dark galaxy
Originally, scientists were simply trying to capture an image of the gravitational lens SDP.81 using the Atacama Large Millimeter Array. Their efforts were part of a 2014 survey aimed at testing ALMA's new, high-resolution capabilities. More than a year later, however, the image revealed a surprise – a dwarf dark galaxy hiding in the halo of a larger galaxy, positioned some 4 billion light-years from Earth. A gravitational lens, or gravitational lensing, is a phenomenon whereby the gravity of a closer galaxy bends the light of a more distant galaxy, creating a magnifying lens-like effect. The phenomenon is often used to study galaxies that would otherwise be too far away to see. Astronomers initially assumed SDP.81 revealed the light of two galaxies – that of a more distant galaxy, 12 billion light-years away, and that of a closer galaxy, 4 billion light-years away. But new analysis of the image by researchers at Stanford University has revealed evidence of a dwarf dark galaxy. "We can find these invisible objects in the same way that you can see rain droplets on a window. You know they are there because they distort the image of the background objects," astronomer Yashar Hezaveh explained in a news release. The gravitational influence of dark matter distorted the light bending through the gravitational lens. Hezaveh and his colleagues recruited the power of several supercomputers to scan the radio telescope data for anomalies within the halo of SDP.81.
They succeeded in identifying a unique clump of distortion, less than one-thousandth the mass of the Milky Way. The work may pave the way for the discovery of more collections of dark matter and also solve a discrepancy that's long plagued cosmologists and astronomers.

Leonardo Da Vinci's Living Relatives Found
Leonardo da Vinci lives on, according to two Italian researchers who have tracked down the living relatives of the Renaissance genius. It was believed that no traces were left of the painter, engineer, mathematician, philosopher and naturalist. The remains of Leonardo, who died in 1519 in Amboise, France, were dispersed in the 16th century during religious wars. But according to historian Agnese Sabato and art historian Alessandro Vezzosi, director of the Museo Ideale in the Tuscan town of Vinci, where the artist was born in 1452, Da Vinci's family did not go extinct.

Stephen Hawking: We Probably Won't Find Aliens Anytime Soon
Hawking made the prediction yesterday (April 12) during the Breakthrough Starshot announcement in New York City. At the news conference, Hawking, along with Russian billionaire investor Yuri Milner and a group of scientists, detailed a new project that aims to send a multitude of tiny, wafer-size spaceships into space to the neighboring star system Alpha Centauri. If these tiny spaceships travel at 20 percent the speed of light, they'll be able to reach Alpha Centauri in just 20 years, Milner said. Once there, the spacecraft will be able to do a 1-hour flyby of Alpha Centauri and collect data that's impossible to gather from Earth, such as taking close-up photos of the star system, probing space dust molecules and measuring magnetic fields, said Avi Loeb, chairman of the Breakthrough Starshot Advisory Committee and a professor of science at Harvard University.

Measurement of Universe's expansion rate creates cosmological puzzle
The most precise measurement ever made of the current rate of expansion of the Universe has produced a value that appears incompatible with measurements of radiation left over from the Big Bang. If the findings are confirmed by independent techniques, the laws of cosmology might have to be rewritten. This might even mean that dark energy — the unknown force that is thought to be responsible for the observed acceleration of the expansion of the Universe — has increased in strength since the dawn of time. “I think that there is something in the standard cosmological model that we don't understand,” says Adam Riess, an astrophysicist at Johns Hopkins University in Baltimore, Maryland, who co-discovered dark energy in 1998 and led the latest study. Kevork Abazajian, a cosmologist at the University of California, Irvine, who was not involved in the study, says that the results have the potential of “becoming transformational in cosmology”.

Stephen Hawking Helps Launch Project 'Starshot' for Interstellar Space Exploration
The famed cosmologist, along with a group of scientists and billionaire investor Yuri Milner, unveiled an ambitious new $100 million project today (April 12) called Breakthrough Starshot, which aims to build the prototype for a tiny, light-propelled robotic spacecraft that could visit the nearby star Alpha Centauri after a journey of just 20 years. "The limit that confronts us now is the great void between us and the stars, but now we can transcend it," Hawking said today during a news conference here at One World Observatory.
'Bizarre' Group of Distant Black Holes are Mysteriously Aligned
A highly sensitive radio telescope has seen something peculiar in the depths of our cosmos: a group of supermassive black holes are mysteriously aligned, as if captured in a synchronized dance. These black holes, which occupy the centers of galaxies in a region of space called ELAIS-N1, appear to have no relation to one another, separated by millions of light-years. But after studying the radio waves generated by the twin jets blasting from the black holes’ poles, astronomers using data from the Giant Metrewave Radio Telescope (GMRT) in India realized that all the jets were pointed in the same direction, like arrows on compasses all pointing “north.” This is the first time a group of supermassive black holes in galactic cores have been seen to share this bizarre relationship and, at first glance, the occurrence should be impossible. What we are witnessing is a cluster of galaxies that all have central supermassive black holes with their axes of rotation pointed in the same direction. “Since these black holes don’t know about each other, or have any way of exchanging information or influencing each other directly over such vast scales, this spin alignment must have occurred during the formation of the galaxies in the early universe,” said Andrew Russ Taylor, director of the Inter-University Institute for Data Intensive Astronomy in Cape Town, South Africa. Taylor is lead author of the study published in the journal Monthly Notices of the Royal Astronomical Society.

Isaac Newton: handwritten recipe reveals fascination with alchemy
A 17th-century recipe written by Isaac Newton is now going online, revealing more about the physicist’s relationship with the ancient science of alchemy. Calling for ingredients such as "one part Fiery Dragon" and "at least seven Eagles of mercury," the handwritten recipe describes how to make "sophick mercury," seen at the time as an essential element in creating the "philosopher’s stone," a fabled substance with the power to turn base metals, like lead, into gold. The manuscript, which is written in Latin and English, was acquired in February by Philadelphia-based nonprofit the Chemical Heritage Foundation, National Geographic reports. The foundation is now working to upload digital images and transcriptions of the text to an online database.

Scientists may not have been able to spot the proposed ninth planet in our solar system, or even confirm that it exists, but that hasn't stopped them from imagining how it looks. Astrophysicists from the University of Bern recently showed off a new model of the possible evolution of Planet Nine, a planet hypothesized to explain the movement of bodies at our solar system's edge. Published in the journal Astronomy and Astrophysics, the model shows the possible size, temperature, and brightness of the mysterious planet.

New research suggests that the sugar ribose -- the "R" in RNA -- is probably found in comets and asteroids that zip through the solar system and may be more abundant throughout the universe than was previously thought. The finding has implications not just for the study of the origins of life on Earth, but also for understanding how much life there might be beyond our planet. Scientists already knew that several of the molecules necessary for life, including amino acids, nucleobases and others, can be made from the interaction of cometary ices and space radiation.
But ribose, which makes up the backbone of the RNA molecule, had been elusive -- until now. The new work, published Thursday in Science, fills in another piece of the puzzle, said Andrew Mattioda, an astrochemist at NASA Ames Research Center, who was not involved with the study. Surprise! Gigantic Black Hole Found in Cosmic Backwater New Bizarre State of Matter Seems to Split Fundamental Particles A bizarre new state of matter has been discovered — one in which electrons that usually are indivisible seem to break apart. The new state of matter, which had been predicted but never spotted in real life before, forms when the electrons in an exotic material enter into a type of "quantum dance," in which the spins of the electrons interact in a particular way, said Arnab Banerjee, a physicist at Oak Ridge National Laboratory in Tennessee. The findings could pave the way for better quantum computers, Banerjee said. Is Mysterious 'Planet Nine' Tugging on NASA Saturn Probe? Just this month, evidence from the Cassini spacecraft orbiting Saturn helped close in on the missing planet. Many experts suspect that within as little as a year someone will spot the unseen world, which would be a monumental discovery that changes the way we view our solar system and our place in the cosmos. "Evidence is mounting that something unusual is out there — there's a story that's hard to explain with just the standard picture," says David Gerdes, a cosmologist at the University of Michigan who never expected to find himself working on Planet Nine. He is just one of many scientists who leapt at the chance to prove — or disprove — the team's careful calculations. Researchers made the smallest diode using a DNA molecule The study could lead to nanoscale electronic components and devices. A team of researchers from the University of Georgia and Ben-Gurion University has developed an electronic component so tiny, you can't even see it under an ordinary microscope. See, the team used a single DNA molecule to create a diode, a component that conducts electricity mostly in one direction. Further, the DNA molecule they designed for the study only has 11 base pairs. That makes it a pretty short helix, considering a human genome has approximately 3 billion pairs. To allow a current to flow through the DNA, the team inserted a molecule called "coralyne" into the helix. What the team came up with was a diode, because the current was 15 times stronger for negative voltages than for positive. The study's lead author Bingqian Xu decided to experiment on DNA to create minuscule components, since we can't exactly use silicon for parts that size. Astronomers Discover Colossal 'Super Spiral' Galaxies A strange new kind of galactic beast has been spotted in the cosmic wilderness. Dubbed "super spirals," these unprecedented galaxies dwarf our own spiral galaxy, the Milky Way, and compete in size and brightness with the largest galaxies in the universe. Super spirals have long hidden in plain sight by mimicking the appearance of typical spiral galaxies. A new study using archived NASA data reveals these seemingly nearby objects are in fact distant, behemoth versions of everyday spirals. Rare, super spiral galaxies present researchers with the major mystery of how such giants could have arisen.
"We have found a previously unrecognized class of spiral galaxies that are as luminous and massive as the biggest, brightest galaxies we know of," said Patrick Ogle, an astrophysicist at the Infrared Processing and Analysis Center (IPAC) at the California Institute of Technology in Pasadena and lead author of a new paper on the findings published in The Astrophysical Journal. "It's as if we have just discovered a new land animal stomping around that is the size of an elephant but had shockingly gone unnoticed by zoologists." How a distant planet could have killed the dinosaurs A new paper revamps the age-old theory of Planet X – a distant planet with an gravitational pull that dislodges stray comets and sends them toward Earth.A theoretical giant planet orbiting at the far edges of our solar system may be redirecting stray comets and asteroids into the inner solar system, one or some of which could have caused the dinosaur extinction event on Earth. The theory might seem like science fiction, but a study by astrophysicist Daniel Whitmire offers scientific data to back up the claim. Dr. Whitmire, who now teaches math at the University of Arkansas, says the mysterious planet has been causing extinction events at regular intervals – every 27 million years – on Earth. Sound familiar? The new paper, published in the Monthly Notices of the Royal Astronomical Society, is a revision of a previous study Whitmore and his partner John Matese proposed in 1985. Search for alien signals expands to 20,000 star systems The search for radio signals from alien worlds is expanding to 20,000 star systems that were previously considered poor targets for intelligent extraterrestrial life, US researchers said Wednesday. The ExoMars spacecraft has blasted off from the Baikonur Cosmodrome in Kazakhstan to search for signs of life on the Red Planet. It's a mission that presents incredible scientific and engineering challenges - as it looks to unravel some of the mysteries of our Solar System. Watch this European Space Agency video to understand this mission. New Tetraquark Particle Sparks Doubts The new tetraquark — an arrangement of four quarks, the fundamental particles that build up the protons and neutrons inside atoms — was first announced in late February by physicists taking part in the DZero experiment at the Tevatron collider at the Fermi National Accelerator Laboratory (Fermilab) in Illinois. The finding represented a surprising configuration of quarks of four different flavors that was not predicted and could help elucidate the maddeningly complex rules that govern these particles. But now scientists at the Large Hadron Collider (LHC) — the world's largest particle accelerator, buried beneath Switzerland and France — say they have tried and failed to find confirming evidence for the particle in their own data. "We don't see any of these tetraquarks at all," says Sheldon Stone, a Syracuse University physicist who led the analysis for the Large Hadron Collider Beauty (LHCb) experiment. "We contradict their result." 7 Theories on the Origin of Life Life on Earth began more than 3 billion years ago, evolving from the most basic of microbes into a dazzling array of complexity over time. But how did the first organisms on the only known home to life in the universe develop from the primordial soup? One theory involved a “shocking” start. Another idea is utterly chilling. And one theory is out of this world! This article reveals the different scientific theories on the origins of life on Earth. 
NIST Creates Fundamentally Accurate Quantum Thermometer DNA data storage could last thousands of years Researchers in Switzerland have developed a method for writing vast amounts of information in DNA and storing it inside a synthetic fossil, potentially for thousands of years. In past centuries, books and scrolls preserved the knowledge of our ancestors, even though they were prone to damage and disintegration. In the digital era, most of humanity's collective knowledge is stored on servers and hard drives. But these have a limited lifespan and need constant maintenance. Scientists from ETH Zurich have taken inspiration from the natural world in a bid to devise a storage medium that could last for potentially thousands of years. They say that genetic material found in fossils hundreds of thousands of years old can be isolated and analyzed, as it has been protected from environmental stresses. Hints of new LHC particle get slightly stronger Hints of a mysterious new particle at the world's largest particle accelerator just got a little stronger. The excess of photons produced by particle collisions at the Large Hadron Collider (LHC) has kept physicists abuzz since it was discovered three months ago: it is now slightly more statistically significant but still falls well short of the certainty needed to claim a discovery. In December, physicists announced that they had seen an excess of pairs of γ-ray photons with a combined energy of around 750 gigaelectronvolts. The data came from ATLAS and CMS, the two largest detectors at the 27-kilometre LHC, which is at CERN, the European particle physics laboratory near Geneva, Switzerland. That excess of photons seen by the CMS experiment has now become slightly more significant, owing to a fresh analysis reported on 17 March at a conference in La Thuile, Italy. But to the disappointment of many, the significance seen by ATLAS actually went down a bit, as a result of a more conservative interpretation of the data. The data used in the latest CMS analysis is 23% larger as it includes collisions from early in the LHC’s 2015 run, when the detector’s magnet was switched off due to a problem in its cooling system. The magnetic field affects detector electronics, so data taken without the field needed careful and separate calibration. “The good news is, we now have almost as much data as ATLAS,” says James Olsen, CMS physics coordinator and a physicist at Princeton University in New Jersey. Snake walk: The physics of slithering Innumerable critters have evolved superb ways to scuttle and slither - or even burrow and "swim" - across the most unhelpful of terrains: those that flow. If you've ever tried to walk up a sand dune, then you are familiar with the problem: unstable ground makes a mission out of locomotion. Now, imagine doing it on your belly. This is why a team of physicists is playing with snakes in a custom-built sand pit. The way they move is a marvel. (The snakes, not the physicists.) Ms Perrin Schiebel is studying for a PhD in physics at the Georgia Institute of Technology in Atlanta, US. She has spent many months putting 10 of these snakes through their slippery paces in a sand-filled aquarium. "One of the things that's really interesting about snakes is that their entire body is, in this type of locomotion, in sliding contact with the ground," Ms Schiebel explains.
Astronomers discover a new galaxy far, far(ther) away A team of astronomers say they have discovered a hot, star-popping galaxy that – at 13.4 billion light years away – is much farther than any galaxy previously identified, both in time and distance. Using a technique that has raised some skepticism among rival astronomers, they say they’ve identified a galaxy from a time when the universe was only about 400 million years old. That’s a time period commonly believed to be impossible to observe with today’s technology. The discovery far surpasses previous records for distance and time, and may be the farthest that can be seen until a new space telescope is launched, the astronomers report in a paper published Thursday in The Astrophysical Journal. For the last half-decade, NASA has resolutely declared that it has embarked on a Journey to Mars. Virtually every agency achievement has, in one way or another, been characterized as furthering this ambition. Even last summer when the New Horizons spacecraft flew by Pluto, NASA Administrator Charles Bolden said it represented “one more step” on the Journey to Mars. But as the end of President Obama’s second term in office nears, Congress has begun to assess NASA’s Mars ambitions. On Wednesday during a House space subcommittee hearing, legislators signaled that they were not entirely pleased with those plans. Comments from lawmakers, and the three witnesses called to the hearing, indicate NASA’s Journey to Mars may receive some pushback in the next year or two. Some of the most critical testimony came from John Sommerer, a space scientist who spent more than a year as chairman of a National Research Council technical panel reviewing NASA’s human spaceflight activities. That panel’s work, summarized in a 2014 report titled Pathways to Exploration, considered possible pathways to Mars. Never-Seen-Before Tetraquark Particle Possibly Spotted in Atom Smasher Evidence for a never-before-seen particle containing four types of quark has shown up in data from the Tevatron collider at the Fermi National Accelerator Laboratory (Fermilab) in Illinois. The new particle, a class of "tetraquark," is made of a bottom quark, a strange quark, an up quark and a down quark. The discovery could help elucidate the complex rules that govern quarks — the tiny fundamental particles that make up the protons and neutrons inside all the atoms in the universe. Protons and neutrons each contain three quarks, which is by far the most stable grouping. Pairings of a quark and an antiquark, called mesons, also commonly appear, but larger conglomerations of quarks are extremely rare. Scientists at the Large Hadron Collider (LHC) in Switzerland last year saw the first signs of a pentaquark—a grouping of five quarks—which had long been predicted but never seen. The first tetraquark was found in 2003 at the Belle experiment in Japan, and since then physicists have encountered a half dozen different arrangements. But the new one, if confirmed, would be special. “What’s unique in this case is that we basically have four quarks, which are all different—bottom, up, strange and down,” says Dmitri Denisov, co-spokesperson for the DZero experiment. “In all previous configurations usually two quarks are the same. Is this telling us something?
I hope yes.” The unusual arrangement, dubbed X(5568) in a paper submitted to Physical Review Letters, could reflect some deeper rule about how the different types, or “flavors,” of quarks bind together—a process enabled by the strongest force in nature, called, appropriately, the strong force. Physicists have a theory—called quantum chromodynamics—that describes how the strong force works, but it is incredibly unwieldy and difficult to make predictions with. “While we understand many features of the strong force, we don’t understand everything, especially how the strong force acts on large distances,” Denisov says. “And on a fundamental level we still don’t have a very good model of how quarks interact when there are quite a few of them joined together.” ET search: Look for the aliens looking for Earth By watching how the light dims as a planet orbits in front of its parent star, NASA’s Kepler spacecraft has discovered more than 1,000 worlds since its launch in 2009. Now, astronomers are flipping that idea on its head in the hope of finding and talking to alien civilizations. Scientists searching for extraterrestrial intelligence should target exoplanets from which Earth can be seen passing in front of the Sun, says René Heller, an astronomer at the Max Planck Institute for Solar System Research in Göttingen, Germany. By studying these eclipses, known as transits, civilizations on those planets could see that Earth has an atmosphere that has been chemically altered by life. “They have a higher motivation to contact us, because they have a better means to identify us as an inhabited planet,” Heller says. About 10,000 stars that could harbour such planets should exist within about 1,000 parsecs (3,260 light years) of Earth, Heller and Ralph Pudritz, an astronomer at McMaster University in Hamilton, Canada, report in the April issue of Astrobiology. They argue that future searches for signals from aliens, such as the US$100-million Breakthrough Listen project, should focus on these stars, which fall in a band of space formed by projecting the plane of the Solar System out into the cosmos. Breakthrough Listen currently has no plans to search this region; it is targeting both the centre and the plane of our galaxy, which is not the same as the plane of the Solar System, as well as stars and galaxies across other parts of the sky. Physicists create first photonic Maxwell's demon Black holes banish matter into cosmic voids In recent decades, astronomers have cultivated a picture of the universe dominated by unseen matter, in which – on the largest scales – galaxies and everything they contain are concentrated into honeycomb-like filaments stretching around the edge of enormous voids. Until a recent study, the voids were thought to be almost empty. Now astronomers in Austria, Germany and the United States say these dark areas in space could contain as much as 20% of the ordinary matter of our cosmos. They also say that galaxies make up only 1/500th of the volume of the universe. The team, led by Dr. Markus Haider of the Institute of Astro- and Particle Physics at the University of Innsbruck in Austria, published these results in a new paper in Monthly Notices of the Royal Astronomical Society on February 24, 2016. How NASA's new telescope could unlock some mysteries of the universe One of the most highly anticipated astronomical missions of the next decade is now officially in the works at NASA.
The Wide Field Infrared Survey Telescope, or WFIRST, has been under study for years and it was formally decided Wednesday that the project would be moving forward. “WFIRST has the potential to open our eyes to the wonders of the universe, much the same way Hubble [Space Telescope] has,” said NASA Science Mission Directorate associate administrator John Grunsfeld in an agency release. “This mission uniquely combines the ability to discover and characterize planets beyond our own solar system with the sensitivity and optics to look wide and deep into the universe in a quest to unravel the mysteries of dark energy and dark matter.” 5D Black Holes Could Break Relativity Ring-shaped, five-dimensional black holes could break Einstein's theory of general relativity, new research suggests. There's a catch, of course. These 5D "black rings" don't exist, as far as anyone can tell. Instead, the new theoretical model may point out one reason why we live in a four-dimensional universe: Any other option could be a hot mess. "Here we may have a first glimpse that four space-time dimensions is a very, very good choice, because otherwise, something pretty bad happens in the universe," said Ulrich Sperhake, a theoretical physicist at the University of Cambridge in England. Reactor data hint at existence of fourth neutrino In tunnels deep inside a granite mountain at Daya Bay, a nuclear reactor facility some 55 kilometers from Hong Kong, sensitive detectors are hinting at the existence of a new form of neutrino, one of nature’s most ghostly and abundant elementary particles. Neutrinos, electrically neutral particles that sense only gravity and the weak nuclear force, interact so feebly with matter that 100 trillion of them zip unimpeded through your body every second. They come in three known types: electron, muon and tau. The Daya Bay results suggest the possibility that a fourth, even more ghostly type of neutrino exists — one more than physicists’ standard theory allows. Dubbed the sterile neutrino, this phantom particle would carry no charge of any kind and would be impervious to all forces other than gravity. Only when shedding its invisibility cloak by transforming into an electron, muon or tau neutrino could the sterile neutrino be detected. Definitive evidence “would open up a whole new avenue of research,” says particle physicist Stephen Parke of the Fermi National Accelerator Laboratory in Batavia, Ill. Possible evidence for the sterile particle comes from a mismatch between theory and experiment. If a nuclear reactor produces a beam of just one type of neutrino, theory predicts that some should change their identity as they travel to a far-off detector (SN Online: 10/6/15). Analyzing more than 300,000 electron antineutrinos (the antimatter counterpart of the electron neutrino) collected from the Daya Bay nuclear reactors during 217 days of operation, researchers found 6 percent fewer of the particles than predicted by the standard particle physics model. Particle physicist Kam-Biu Luk of the University of California, Berkeley and the Lawrence Berkeley National Laboratory and colleagues report the findings in the Feb. 12 Physical Review Letters. One explanation for the deficit is that some of the electron antineutrinos have transformed into an undetectable, lightweight sterile neutrino, about one-millionth the mass of an electron, says Luk. Other nuclear reactor studies, including an experiment at the Bugey reactor in Saint-Vulbas, France, have seen similar electron antineutrino deficits, he notes.
Studies with muon antineutrino beams at some particle accelerators have seen an excess of electron antineutrinos, which might be attributed to a different kind of sleight-of-hand by the unseen sterile neutrinos. The Daya Bay result provides the most precise measure yet of the energies of electron antineutrinos at a nuclear reactor. Even so, the statistical significance of the deficit is not high enough to rate the finding a discovery. The result is a “three-sigma” finding, meaning that there’s about a 0.3 percent probability that such a paucity of electron antineutrinos would have occurred if no sterile neutrino exists. Physicists generally want a discrepancy to have a significance of five-sigma, or a 0.00003 percent chance of being a fluke, before they will label it a discovery. Besides the hint of sterile neutrinos, the Daya Bay results reveal a second strange feature — an excess of electron antineutrinos (compared with theoretical predictions) at an energy of around 5 million electron volts. That could be a sign of completely new physics awaiting discovery (or simply that scientists don’t have a detailed enough grasp of the output of nuclear reactors). A revised understanding of that feature might even do away with the need for a lightweight sterile neutrino to explain the overall deficit in electron antineutrinos. But if definitive evidence for a light sterile neutrino is eventually found, it “would turn the theory community on its head,” says Parke, and could have a bigger impact than the discovery of the Higgs boson, the Nobel-winning finding that explains why elementary particles have mass. “Finding a sterile neutrino is extremely important because it would be the first discovery of a particle which cannot be accommodated in the framework of the so-called standard model,” says particle physicist Carlo Giunti of the University of Turin in Italy. One of the earliest experiments that suggested the presence of sterile neutrinos was the Liquid Scintillator Neutrino Detector, which operated at the Los Alamos National Laboratory in New Mexico from 1993 to 1998. The LSND found that muon antineutrinos beamed into 167 tons of mineral oil had morphed into electron antineutrinos in a way that seemed to require a fourth type of neutrino to exist. A follow-up experiment at Fermilab, called MiniBooNE, ran from 2002 to 2012, with equivocal results. Another Fermilab experiment, MicroBooNE, began operation last October. MicroBooNE is the first of three liquid argon detectors, spaced at different distances near neutrino sources at Fermilab, that will track with unprecedented precision the transformation of neutrinos from one type to another. Located 470 meters from Fermilab’s Booster Neutrino Beamline, MicroBooNE is the middle of the trio, to be joined in 2018 by ICARUS, the farthest detector, at a distance of about 600 meters from the beamline, and the Short-Baseline Near Detector, placed just 100 meters from the source. First results from the trio are expected in 2021, says experimental particle physicist Peter Wilson of Fermilab. The detectors will also serve as a prototype for the Deep Underground Neutrino Experiment, a large-scale experiment that will send Fermilab-generated neutrinos on a 1,300-kilometer journey to the Sanford Underground Research Facility near Lead, S.D. In the meantime, the Daya Bay collaboration has teamed up with another Fermilab experiment, the Main Injector Neutrino Oscillation Search, to continue to seek signs of the sterile neutrinos.
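The significance figures quoted above are simply tail probabilities of a normal distribution, and the conversion is a one-liner. A minimal sketch in Python (note that the roughly 0.3 percent quoted for three sigma is a two-sided tail, while the 0.00003 percent quoted for five sigma is one-sided; both conventions appear in practice):

```python
import math

def tail_probabilities(n_sigma):
    """Return (one_sided, two_sided) Gaussian tail probabilities for n_sigma."""
    two_sided = math.erfc(n_sigma / math.sqrt(2.0))
    return two_sided / 2.0, two_sided

for n in (3, 5):
    one, two = tail_probabilities(n)
    print(f"{n} sigma: one-sided {one*100:.5f}%, two-sided {two*100:.5f}%")

# 3 sigma: two-sided ~0.27%    (the "about 0.3 percent" above)
# 5 sigma: one-sided ~0.00003% (the discovery threshold quoted above)
```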
Although data from accelerator and reactor experiments do not yet paint a consistent picture, “we will soon know better whether a light sterile neutrino is waiting for us to unveil,” says Luk. If a light sterile neutrino exists, it might have siblings about 1,000 times heavier. These particles could contribute to the as-yet-unidentified dark matter, the invisible gravitational glue that keeps galaxies from flying apart and shapes the large-scale structure of the universe. Fingerprints of this particle will be sought with an experiment called KATRIN, which examines the radioactive decay of tritium, a heavy isotope of hydrogen, at the Karlsruhe Institute of Technology in Germany. Sterile neutrinos that are even more massive, more than a trillion times heavier than the electron, could explain an even bigger cosmic mystery — the mismatch between the amounts of matter and antimatter in the cosmos. Possessing an energy at least a million times greater than can be produced at the Large Hadron Collider, the world’s most powerful particle accelerator, a superheavy sterile neutrino in the early universe would have made a smidgen more matter than antimatter. Over time, the tiny imbalance, reproduced in countless nuclear reactions, would have generated the matter-dominated universe seen today (SN: 1/26/13, p. 18). “For cosmology, the [lightweight] sterile neutrino that we are talking about cannot solve the problem of the matter-antimatter asymmetry, but it is likely that the sterile neutrino is connected with other new particles that can solve the problem,” says Giunti. Scientists see another, more practical, benefit for studying neutrinos. By recording the antineutrino output of nuclear reactors, detectors can discern the relative amounts of plutonium and uranium, the raw materials for making nuclear weapons. Gram for gram, fissioned plutonium and uranium have distinctive fingerprints in both the energy and rate of antineutrinos they produce, says physicist Adam Bernstein of the Lawrence Livermore National Laboratory in California. Closeup monitoring of reactors, from a distance of 10 to 500 meters, has already been demonstrated; detectors capable of monitoring weapons activity from several hundred kilometers away are possible but will require additional research and funding, Bernstein says. Single-particle ‘spooky action at a distance’ finally demonstrated For the first time, researchers have demonstrated what Albert Einstein called "spooky action at a distance" using a single particle. And not only is it a huge deal for our understanding of quantum mechanics, it also proves that the physics genius got something wrong. Spooky action at a distance, or quantum entanglement, in a single particle is a strange form of entanglement that could greatly help to improve quantum computing and communications. Unlike regular quantum entanglement, which involves two particles being defined only by being opposites of each other, single particles that are entangled have a wave function that's spread over huge distances, but are never actually in more than one place. While it isn't quite the magical wave of the hand envisioned in Star Trek, 3D printers are still pretty close to the replicators seen from The Next Generation on, able to fabricate body parts, cars, and even food from raw materials. It's no wonder NASA wants them to build tools, rocket engines, and even housing on Mars. But now, NASA has launched a challenge meant to bring kids into the mix: it wants 3D printed food implements.
It's not quite the same as, say, a 3D printed pizza. But what NASA wants is, essentially, kitchenware and food-growing aids. In the Star Trek Replicator competition, it wants a "non-edible, food-related item for astronauts to 3D print in the year 2050," which is around when we'll supposedly be on Mars. Ish. The design guidelines are fairly open ended, but there are a few ground rules. It must not be bigger than 6 inches cubed (6"x6"x6"). The K-12 student designing it must designate where it will be 3D printed, and why it's suited for that environment. It must advance long term space exploration. And it must involve only a single material (so there's no printing, say, a metal alloy and plastic into the same object). The contest kicks off today, with entries due by May 1. Four finalists will win a 3D printer for their school and a PancakeBot for themselves, while the grand prize is a tour of the Intrepid Sea, Air, & Space Museum in New York with an astronaut, a bunch of Star Trek swag, and more. There are multiple eligible age groups, so if you have a school aged child, this could be just the contest for them. Photographs fade, books rot, and even hard drives eventually fester. When you take the long view, preserving humanity's collective culture isn't a marathon, it's a relay — with successive generations passing on information from one slowly-failing storage medium to the next. However, this could change. Scientists from the University of Southampton in the UK have created a new data format that encodes information in tiny nanostructures in glass. A standard-sized disc can store around 360 terabytes of data, with an estimated lifespan of up to 13.8 billion years even at temperatures of 190°C. That's as old as the Universe, and more than three times the age of the Earth. A new photograph of galaxy NGC 4889 may look peaceful from such a great distance, but it’s actually home to one of the biggest black holes that astronomers have ever identified. The Hubble Space Telescope allowed scientists to capture photos of the galaxy, located in the Coma Cluster about 300 million light-years away. The supermassive black hole hidden away in NGC 4889 breaks all kinds of records, even though it is currently classified as dormant. So how big is it, exactly? Well, according to our best estimates, the supermassive black hole is roughly 21 billion times the mass of the Sun, and its event horizon (the boundary inside which nothing, not even light, can escape its gravity) measures 130 billion kilometers in diameter. That’s about 15 times the diameter of Neptune’s orbit around the Sun, according to scientists at the Hubble Space Telescope. At one point, the black hole was fueling itself on a process called hot accretion. Space stuff like gases, dust, and galactic debris fell towards the black hole and created an accretion disk. Then that spinning disk of space junk, accelerated by the strong gravitational pull of one of the largest known black holes, emitted huge jets of energy out into the galaxy. LIGO Discovers the Merger of Two Black Holes Big news: the Laser Interferometer Gravitational-Wave Observatory (LIGO) has detected its first gravitational-wave signal! Not only is the detection of this signal a major technical accomplishment and an exciting confirmation of general relativity, but it also has huge implications for black-hole astrophysics. What did LIGO see? LIGO is designed to detect the ripples in space-time created by two massive objects orbiting each other.
These waves can reach observable amplitudes when a binary system consisting of two especially massive objects — i.e., black holes or neutron stars — reaches the end of its inspiral and merges. LIGO has been unsuccessfully searching for gravitational waves since its initial operations in 2002, but a recent upgrade in its design has significantly increased its sensitivity and observational range. The first official observing run of Advanced LIGO began 18 September 2015, but the instruments were up and running in “engineering mode” several weeks before that. And it was in this time frame — before official observing even began! — that LIGO spotted its first gravitational wave signal: GW150914. (Image: one of LIGO’s two detection sites, located near Hanford in eastern Washington.) The signal, detected on 14 September 2015, provides astronomers with a remarkable amount of information about the merger that caused it. From the detection, the LIGO team has extracted the masses of the two black holes that merged, 36 (+5/-4) and 29 (+4/-4) solar masses, as well as the mass of the final black hole formed by the merger, ~62 solar masses. The team also determined that the merger happened roughly a billion light-years away (at a redshift of z~0.1), and the direction of the signal was localized to an area of ~600 square degrees (roughly 1% of the sky). Why is this detection a big deal? This is the first direct detection of gravitational waves, providing spectacular further confirmation of Einstein’s general theory of relativity. But the implications of GW150914 go far beyond this confirmation. This detection is a huge deal for astrophysics because it’s the first direct evidence we’ve had that: “Heavy” stellar-mass black holes exist. We’ve reliably measured black holes of masses up to 10–20 solar masses in X-ray binaries (binary systems in which a single neutron star or black hole accretes matter from a donor star). But this is the first proof we’ve found that stellar-mass black holes of >25 solar masses can form in nature. Binaries consisting of two black holes can form in nature. As we’ll discuss shortly, there are two theorized mechanisms for the formation of these black-hole binaries. Until now, however, there was no guarantee that either of those mechanisms worked! These black-hole binaries can inspiral and merge within the age of the universe. The formation of a black-hole binary is no guarantee that it will merge on a reasonable timescale: if the binary forms with enough separation, it could take longer than the age of the universe to merge. This detection proves that black-hole binaries can form with small enough separation to merge on observable timescales. What can we learn from GW150914? One of the key questions we’d like to answer is: how do binary black holes form? Two primary mechanisms have been proposed. In the first, a binary star system contains two stars that are each massive enough to individually collapse into a black hole. If the binary isn’t disrupted during the two collapse events, this forms an isolated black-hole binary. In the second, single black holes form in dense cluster environments and then — because they are the most massive objects — sink to the center of the cluster. There they form pairs through dynamical interactions. Now that we’re able to observe black-hole binaries through gravitational-wave detections, one way we could distinguish between the two formation mechanisms is from spin measurements.
If we discover a clear preference for the misalignment of the two black holes’ spins, this would favor formation in clusters, where there’s no reason for the original spins to be aligned. The current, single detection is not enough to provide constraints, but if we can compile a large enough sample of events, we can start to present a statistical case favoring one channel over the other. What does GW150914 mean for the future of gravitational-wave detection? The fact that Advanced LIGO detected an event even before the start of its first official observing run is certainly promising! The LIGO team estimates that the volume the detectors can probe will still increase by at least a factor of ~10 as the observing runs become more sensitive and of longer duration. In addition, LIGO is not alone in the gravitational-wave game. LIGO’s counterpart in Europe, Virgo, is also undergoing design upgrades to increase its sensitivity. Within this year, Virgo should be able to take data simultaneously with LIGO, allowing for better localization of sources. And the launch of (e)LISA, ESA’s planned space-based interferometer, will grant us access to a new frequency range, opening a further window to the gravitational-wave sky. The detection of GW150914 marks the dawn of a new field: observational gravitational-wave astronomy. This detection alone confirms much that was purely theory before now — and given that instrument upgrades are still underway, the future of gravitational-wave detection looks incredibly promising. This awesome video (produced by SXS lensing) shows an actual simulation of the black-hole merger GW150914. Time is slowed by a factor of 100, compared to the actual merger. The two black holes — of 29 and 36 solar masses — warp the space-time around them, causing the distorted view. Einstein's gravitational waves 'seen' from black holes Scientists are claiming a stunning discovery in their quest to fully understand gravity. They have observed the warping of space-time generated by the collision of two black holes more than a billion light-years from Earth. The international team says the first detection of these gravitational waves will usher in a new era for astronomy. It is the culmination of decades of searching and could ultimately offer a window on the Big Bang. The research, by the Ligo Collaboration, has been published today in the journal Physical Review Letters. The signals they detect are incredibly subtle and disturb the machines, known as interferometers, by just fractions of the width of an atom. But this black hole merger was picked up almost simultaneously by two widely separated Ligo facilities in the US. The merger radiated three times the mass of the sun in pure gravitational energy. "We have detected gravitational waves," Prof David Reitze, executive director of the Ligo project, told journalists at a news conference in Washington DC. "It's the first time the Universe has spoken to us through gravitational waves. Up until now, we've been deaf." Prof Karsten Danzmann, from the Max Planck Institute for Gravitational Physics and Leibniz University in Hannover, Germany, is a European leader on the collaboration. He said the detection was one of the most important developments in science since the discovery of the Higgs particle, and on a par with the determination of the structure of DNA. "There is a Nobel Prize in it - there is no doubt," he told the BBC. 
"It is the first ever direct detection of gravitational waves; it's the first ever direct detection of black holes and it is a confirmation of General Relativity because the property of these black holes agrees exactly with what Einstein predicted almost exactly 100 years ago." Gravitational waves are prediction of the Theory of General Relativity Their existence has been inferred by science but only now directly detected Accelerating masses will produce waves that propagate at the speed of light Detectable sources ought to include merging black holes and neutron stars Detecting the waves opens up the Universe to completely new investigations "Apart from testing (Albert Einstein's theory of) General Relativity, we could hope to see black holes through the history of the Universe. We may even see relics of the very early Universe during the Big Bang at some of the most extreme energies possible." Team member Prof Gabriela González, from Louisiana State University, said: "We have discovered gravitational waves from the merger of black holes. It's been a very long road, but this is just the beginning. "Now that we have the detectors to see these systems, now that we know binary black holes are out there - we'll begin listening to the Universe." The Ligo laser interferometers in Hanford, in Washington, and Livingston, in Louisiana, were only recently refurbished and had just come back online when they sensed the signal from the collision. This occurred at 10.51 GMT on 14 September last year. On a graph, the data looks like a symmetrical, wiggly line that gradually increases in height and then suddenly fades away. "We found a beautiful signature of the merger of two black holes and it agrees exactly - fantastically - with the numerical solutions to Einstein equations... it looked too beautiful to be true," said Prof Danzmann. "With gravitational waves, we do expect eventually to see the Big Bang itself," he told the BBC. In addition, the study of gravitational waves may ultimately help scientists in their quest to solve some of the biggest problems in physics, such as the unification of forces, linking quantum theory with gravity. At the moment, General Relativity describes the cosmos on the largest scales tremendously well, but it is to quantum ideas that we resort when talking about the smallest interactions. Being able to study places in the Universe where gravity is really extreme, such as at black holes, may open a path to new, more complete thinking on these issues. The separate paths bounce back and forth between damped mirrors Eventually, the two light parts are recombined and sent to a detector Gravitational waves passing through the lab should disturb the set-up Theory holds they should very subtly stretch and squeeze its space. This ought to show itself as a change in the lengths of the light arms (green). The photodetector captures this signal in the recombined beam Scientists have sought experimental evidence for gravitational waves for more than 40 years. Einstein himself actually thought a detection might be beyond the reach of technology. His theory of General Relativity suggests that objects such as stars and planets can warp space around them - in the same way that a billiard ball creates a dip when placed on a thin, stretched, rubber sheet. Gravity is a consequence of that distortion - objects will be attracted to the warped space in the same way that a pea will fall in to the dip created by the billiard ball. Inspirational moment. 
Although a fantastically small effect, modern technology has now risen to the challenge. Much of the R&D work for the Washington and Louisiana machines was done at Europe's smaller GEO600 interferometer in Hannover. "I think it's phenomenal to be able to build an instrument capable of measuring [gravitational waves]," said Prof Rowan. "It is hugely exciting for a whole generation of young people coming along, because these kinds of observations and this real pushing back of the frontiers is really what inspires a lot of young people to get into science and engineering." Earth-like Planets Have Earth-like Interiors But is this structure universal? Will rocky exoplanets orbiting other stars have the same three layers? New research suggests that the answer is yes - they will have interiors very similar to Earth. "We wanted to see how Earth-like these rocky planets are. It turns out they are very Earth-like," says lead author Li Zeng of the Harvard-Smithsonian Center for Astrophysics (CfA). To reach this conclusion Zeng and his co-authors applied a computer model known as the Preliminary Reference Earth Model (PREM), which is the standard model for Earth's interior. They adjusted it to accommodate different masses and compositions, and applied it to six known rocky exoplanets with well-measured masses and physical sizes. They found that the other planets, despite their differences from Earth, all should have a nickel/iron core containing about 30 percent of the planet's mass. In comparison, about a third of the Earth's mass is in its core. The remainder of each planet would be mantle and crust, just as with Earth. "We've only understood the Earth's structure for the past hundred years. Now we can calculate the structures of planets orbiting other stars, even though we can't visit them," adds Zeng. The new code also can be applied to smaller, icier worlds like the moons and dwarf planets in the outer solar system. For example, by plugging in the mass and size of Pluto, the team finds that Pluto is about one-third ice (mostly water ice but also ammonia and methane ices). The model assumes that distant exoplanets have chemical compositions similar to Earth. This is reasonable based on the relative abundances of key chemical elements like iron, magnesium, silicon, and oxygen in nearby systems. However, planets forming in more or less metal-rich regions of the galaxy could show different interior structures. The team expects to explore these questions in future research. Physicists find signs of four-neutron nucleus The suspected discovery of an atomic nucleus with four neutrons but no protons has physicists scratching their heads. If confirmed by further experiments, this “tetraneutron” would be the first example of an uncharged nucleus, something that many theorists say should not exist. “It would be something of a sensation,” says Peter Schuck, a nuclear theorist at the National Center for Scientific Research in France who was not involved in the work. Details on the tetraneutron appear in the Feb. 5 Physical Review Letters. Have Gravitational Waves Finally Been Spotted? Astronomers may finally have found elusive gravitational waves, the mysterious ripples in the fabric of spacetime whose existence was first predicted by Albert Einstein in 1916, in his famous theory of general relativity. Scientists are holding a news conference Thursday (Feb. 11) at 10:30 a.m.
EST (1530 GMT) at the National Press Club in Washington, D.C., to discuss the search for gravitational waves, which zoom through space at the speed of light. A media advisory describing the news conference is brief and somewhat vague, promising merely a "status report" on the ongoing hunt by the scientists using the Laser Interferometer Gravitational-Wave Observatory, or LIGO. But there's reason to suspect that researchers will announce a big discovery at the Thursday event. Astronomers build Earth-sized telescope to see Milky Way black hole An Earth-sized telescope will allow astronomers to glimpse the black hole at the centre of the Milky Way. Scientists are currently linking up telescopes across the globe to form the Event Horizon Telescope, which will be the first instrument ever to take detailed pictures of a black hole. Even though the Milky Way’s black hole, known as Sagittarius A* (pronounced ‘Sagittarius A-star’), is four million times more massive than the sun, it is tiny to the eyes of astronomers. It is the equivalent of standing in New York and reading the date on a penny in Germany or seeing a grapefruit on the Moon for someone standing on Earth. But if successful, it will prove for the first time that black holes have ‘event horizons’ – an edge from which nothing can escape, not even light. "The goals of the EHT are to test Einstein's theory of general relativity, understand how black holes eat and generate relativistic outflows, and to prove the existence of the event horizon, or 'edge,' of a black hole," says Dan Marrone. The telescope gets its first major upgrade in centuries The general design of a telescope has remained more or less the same since the technology was first invented in the 17th century. Like an eye, the telescope collects light, and that light is then reflected to form an image. If you want to use one to see a really long way - into the depths of space, say - you'll need a really big one. “We can only scale the size and weight of telescopes so much before it becomes impractical to launch them into orbit and beyond,” says Danielle Wuchenich, senior research scientist at Lockheed Martin’s Advanced Technology Center in California. “Besides, the way our eye works is not the only way to process images from the world around us.” Lockheed Martin is now working on a new technology that promises to drastically reduce the size of telescope needed to see long distances. Its new system, SPIDER (or 'Segmented Planar Imaging Detector for Electro-optical Reconnaissance', to give it its full title), does away with the large lenses or mirrors found in traditional refracting and reflecting telescopes, and replaces them with hundreds or thousands of tiny lenses. Dr Alan Duncan at Lockheed Martin explains: "SPIDER is a new way of collecting light to form images ... We collect the light, couple it into the silicon chip, move it around and combine it in a way that we can measure it with just ordinary detectors like you would have in your cellphone camera. And then (we) take all that data that's collected by those detectors, process it in a computer and form an image." Standard cosmology -- that is, the Big Bang Theory with its early period of exponential growth known as inflation -- is the prevailing scientific model for our universe. It suggests that the entirety of space and time ballooned out from a very hot, very dense point into a homogeneous and ever-expanding vastness. This theory accounts for many of the physical phenomena we observe.
But what if that's not all there was to it? A new theory from physicists at the U.S. Department of Energy's Brookhaven National Laboratory, Fermi National Accelerator Laboratory, and Stony Brook University, which will publish online on January 18 in Physical Review Letters, suggests a shorter secondary inflationary period that could account for the amount of dark matter estimated to exist throughout the cosmos. "In general, a fundamental theory of nature can explain certain phenomena, but it may not always end up giving you the right amount of dark matter," said Hooman Davoudiasl, group leader in the High-Energy Theory Group at Brookhaven National Laboratory and an author on the paper. "If you come up with too little dark matter, you can suggest another source, but having too much is a problem." Measuring the amount of dark matter in the universe is no easy task. It is dark after all, so it doesn't interact in any significant way with ordinary matter. Nonetheless, gravitational effects of dark matter give scientists a good idea of how much of it is out there. The best estimates indicate that it makes up about a quarter of the mass-energy budget of the universe, while ordinary matter -- which makes up the stars, our planet, and us -- comprises just 5 percent. Dark matter is the dominant form of substance in the universe, which leads physicists to devise theories and experiments to explore its properties and understand how it originated. Some theories that elegantly explain perplexing oddities in physics -- for example, the inordinate weakness of gravity compared to other fundamental interactions such as the electromagnetic, strong nuclear, and weak nuclear forces -- cannot be fully accepted because they predict more dark matter than empirical observations can support. This new theory solves that problem. Davoudiasl and his colleagues add a step to the commonly accepted events at the inception of space and time. In standard cosmology, the exponential expansion of the universe called cosmic inflation began perhaps as early as 10^-35 seconds after the beginning of time -- that's a decimal point followed by 34 zeros before a 1. This explosive expansion of the entirety of space lasted mere fractions of a fraction of a second, eventually leading to a hot universe, followed by a cooling period that has continued until the present day. Then, when the universe was just seconds to minutes old -- that is, cool enough -- the formation of the lighter elements began. Between those milestones, there may have been other inflationary interludes, said Davoudiasl. "They wouldn't have been as grand or as violent as the initial one, but they could account for a dilution of dark matter," he said. In the beginning, when temperatures soared past billions of degrees in a relatively small volume of space, dark matter particles could run into each other and annihilate upon contact, transferring their energy into standard constituents of matter: particles like electrons and quarks. But as the universe continued to expand and cool, dark matter particles encountered one another far less often, and the annihilation rate couldn't keep up with the expansion rate. "At this point, the abundance of dark matter is now baked in the cake," said Davoudiasl. "Remember, dark matter interacts very weakly. So, a significant annihilation rate cannot persist at lower temperatures. Self-annihilation of dark matter becomes inefficient quite early, and the amount of dark matter particles is frozen."
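Davoudiasl's freeze-out picture can be made concrete with a toy calculation: the annihilation rate falls exponentially once the temperature drops below the particle's mass, while the expansion rate falls only as the square of the temperature, so the two inevitably cross. The sketch below uses illustrative WIMP-scale inputs (a 100 GeV particle with the canonical weak-scale cross section); none of these numbers come from the paper being described:

```python
import math

# Toy dark matter freeze-out estimate, in natural units (GeV).
# All inputs are illustrative assumptions, not values from the article.
M_PLANCK = 1.22e19   # Planck mass, GeV
G_STAR = 10.0        # effective relativistic degrees of freedom (assumed)
MASS = 100.0         # relic particle mass, GeV (assumed)
SIGMA_V = 2.6e-9     # <sigma v> in GeV^-2, ~3e-26 cm^3/s (canonical value)

def hubble(T):
    """Radiation-era expansion rate H(T)."""
    return 1.66 * math.sqrt(G_STAR) * T ** 2 / M_PLANCK

def n_equilibrium(T):
    """Equilibrium density of a nonrelativistic species with g = 2 spin states."""
    return 2.0 * (MASS * T / (2.0 * math.pi)) ** 1.5 * math.exp(-MASS / T)

# Annihilation decouples where the rate n*<sigma v> drops below H.
for x in range(5, 41):          # x = mass/temperature
    T = MASS / x
    if n_equilibrium(T) * SIGMA_V / hubble(T) < 1.0:
        print(f"freeze-out near m/T ~ {x} (T ~ {T:.1f} GeV)")
        break
```

With these inputs the crossover lands near m/T of about 25 to 30, the textbook answer for a weak-scale relic; the abundance that is "baked in the cake" is whatever has survived up to that point.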
However, the weaker the dark matter interactions, that is, the less efficient the annihilation, the higher the final abundance of dark matter particles would be. As experiments place ever more stringent constraints on the strength of dark matter interactions, there are some current theories that end up overestimating the quantity of dark matter in the universe. To bring theory into alignment with observations, Davoudiasl and his colleagues suggest that another inflationary period took place, powered by interactions in a "hidden sector" of physics. This second, milder, period of inflation, characterized by a rapid increase in volume, would dilute primordial particle abundances, potentially leaving the universe with the density of dark matter we observe today. "It's definitely not the standard cosmology, but you have to accept that the universe may not be governed by things in the standard way that we thought," he said. "But we didn't need to construct something complicated. We show how a simple model can achieve this short amount of inflation in the early universe and account for the amount of dark matter we believe is out there." Proving the theory is another thing entirely. Davoudiasl said there may be a way to look for at least the very feeblest of interactions between the hidden sector and ordinary matter. "If this secondary inflationary period happened, it could be characterized by energies within the reach of experiments at accelerators such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider," he said. Only time will tell if signs of a hidden sector show up in collisions within these colliders, or in other experimental facilities. Stephen Hawking: Black Holes Have 'Hair' Black holes may sport a luxurious head of "hair" made up of ghostly, zero-energy particles, says a new hypothesis proposed by Stephen Hawking and other physicists. The new paper, which was published online Jan. 5 on the preprint server arXiv, proposes that at least some of the information devoured by a black hole is stored in these electric hairs. Still, the new proposal doesn't prove that all the information that enters a black hole is preserved. The Riemann Hypothesis has finally been SOLVED by a Nigerian professor The Riemann Hypothesis is considered one of the hardest maths problems. Devised in 1859, it has been resolved by professor Dr Opeyemi Enoch, who has been given $1 million (£658,000) for the work into prime numbers. The Riemann Hypothesis was one of the seven Millennium Problems in Mathematics set by the Clay Mathematics Institute in 2000. Physicists propose the first scheme to teleport the memory of an organism In "Star Trek", a transporter can teleport a person from one location to a remote location without actually making the journey along the way. Such a transporter has fascinated many people. Quantum teleportation shares several features of the transporter and is one of the most important protocols in quantum information. In a recent study, Prof. Tongcang Li at Purdue University and Dr. Zhang-qi Yin at Tsinghua University proposed the first scheme to use electromechanical oscillators and superconducting circuits to teleport the internal quantum state (memory) and center-of-mass motion state of a microorganism. They also proposed a scheme to create a Schrodinger's cat state in which a microorganism can be in two places at the same time. This is an important step towards potentially teleporting an organism in the future.
Scientists struggle to stay grounded after possible gravitational wave signal Not for the first time, the world of physics is abuzz with rumours that gravitational waves have been detected by scientists in the US. Lawrence Krauss, a cosmologist at Arizona State University, tweeted that he had received independent confirmation of a rumour that has been in circulation for months, adding: “Gravitational waves may have been discovered!!” The excitement centres on a longstanding experiment known as the Advanced Laser Interferometer Gravitational-Wave Observatory (Ligo), which uses detectors in Hanford, Washington, and Livingston, Louisiana to look for ripples in the fabric of spacetime. According to the rumours, scientists on the team are in the process of writing up a paper that describes a gravitational wave signal. If such a signal exists and is verified, it would confirm one of the most dramatic predictions of Albert Einstein’s century-old theory of general relativity. Krauss said he was 60% confident that the rumour was true, but said he would have to see the scientists’ data before drawing any conclusions about whether the signal was genuine or not. Researchers on a large collaboration like Ligo will have any such paper internally vetted before sending it for publication and calling a press conference. In 2014, researchers on another US experiment, called BICEP2, called a press conference to announce the discovery of gravitational waves, but others have since pointed out that the signal could be due entirely to space dust. Speaking about the LIGO team, Krauss said: “They will be extremely cautious. There’s no reason for them to make a claim they are not certain of.” NASA's Kepler Comes Roaring Back with 100 New Exoplanet Finds Physicists figure out how to retrieve information from a black hole Black holes earn their name because their gravity is so strong not even light can escape from them. Oddly, though, physicists have come up with a bit of theoretical sleight of hand to retrieve a speck of information that's been dropped into a black hole. The calculation touches on one of the biggest mysteries in physics: how all of the information trapped in a black hole leaks out as the black hole "evaporates." Many theorists think that must happen, but they don't know how. Unfortunately for them, the new scheme may do more to underscore the difficulty of the larger "black hole information problem" than to solve it. "Maybe others will be able to go further with this, but it's not obvious to me that it will help," says Don Page, a theorist at the University of Alberta in Edmonton, Canada, who was not involved in the work. You can shred your tax returns, but you shouldn't be able to destroy information by tossing it into a black hole. That's because, even though quantum mechanics deals in probabilities—such as the likelihood of an electron being in one location or another—the quantum waves that give those probabilities must still evolve predictably, so that if you know a wave's shape at one moment you can predict it exactly at any future time. Without such "unitarity," quantum theory would produce nonsensical results such as probabilities that don't add up to 100%. But suppose you toss some quantum particles into a black hole. At first blush, the particles and the information they encode are lost.
That's a problem, as now part of the quantum state describing the combined black hole-particles system has been obliterated, making it impossible to predict its exact evolution and violating unitarity. Now, Aidan Chatwin-Davies, Adam Jermyn, and Sean Carroll of the California Institute of Technology in Pasadena have found an explicit way to retrieve information from one quantum particle lost in a black hole, using Hawking radiation and the weird concept of quantum teleportation.

New study asks: Why didn't the universe collapse? The models that best describe the Big Bang and birth of the universe have one glaring problem. Most of them predict a collapse almost immediately after inflation. There was nothing, then there was something. And then there was nothing again. As we know from living and breathing and looking up at a sky action-packed with cosmic activity, there's definitely something more than nothing out there. So why is there still something? Why did the universe's tendency to expand overcome its tendency to collapse? A new study published in Physical Review Letters is just the latest to try to inch closer to a place where physicists might be able to answer those questions. In this particular paper, researchers try to work out the details of the relationship between Higgs boson particles and gravity -- a relationship scientists believe kept an early, unstable universe from collapsing. Their latest calculations confirm that the stronger the bond between Higgs fields and gravity, the greater the chance of instability and a transition to a negative energy vacuum state, a place with little energy and only a few particles popping in and out of existence. A coupling strength above one would have certainly spelled doom for the early universe, scientists at the University of Copenhagen determined. The new math helps narrow the likely coupling range to between 0.1 and 1.

Physicists in Europe Find Tantalizing Hints of a Mysterious New Particle Two teams of physicists working independently at the Large Hadron Collider reported that they had seen traces of what could be a new fundamental particle of nature. One possibility, out of a gaggle of wild and not-so-wild ideas springing to life as the day went on, is that the particle — assuming it is real — is a heavier version of the Higgs boson, a particle that explains why other particles have mass. Another is that it is a graviton, the supposed quantum carrier of gravity, whose discovery could imply the existence of extra dimensions of space-time. At the end of a long chain of "ifs" could be a revolution, the first clues to a theory of nature that goes beyond the so-called Standard Model, which has ruled physics for the last quarter-century. It is, however, far too soon to shout "whale ahoy," physicists both inside and outside CERN said, noting that the history of particle physics is rife with statistical flukes and anomalies that disappeared when more data was compiled. A coincidence is the most probable explanation for the surprising bumps in data from the collider, physicists from the experiments cautioned, saying that a lot more data was needed and would in fact soon be available. "I don't think there is anyone around who thinks this is conclusive," said Kyle Cranmer, a physicist from New York University who works on one of the CERN teams, known as Atlas. "But it would be huge if true," he said, noting that many theorists had put their other work aside to study the new result.
German physicists see landmark in nuclear fusion quest Scientists in Germany said Thursday they had reached a milestone in a quest to derive energy from nuclear fusion, billed as a potentially limitless, safe and cheap source. Nuclear fusion entails fusing atoms together to generate energy -- a process similar to that in the Sun -- as opposed to nuclear fission, where atoms are split, which entails worries over safety and long-term waste. After spending a billion euros ($1.1 billion) and nine years' construction work, physicists working on a German project called the "stellarator" said they had briefly generated a super-heated helium plasma inside a vessel -- a key point in the experimental process.

Scientists detect the magnetic field that powers our galaxy's supermassive black hole The Milky Way, like most galaxies, has a supermassive black hole sitting right in its center. Now, for the first time, scientists have detected a magnetic field just outside the event horizon — or outer boundary — of that black hole. Why do we care? Because that magnetic field is probably what makes our neighborhood black hole so powerful.

Controversial experiment sees no evidence that the universe is a hologram It's a classic underdog story: Working in a disused tunnel with a couple of lasers and a few mirrors, a plucky band of physicists dreamed up a way to test one of the wildest ideas in theoretical physics—a notion from the nearly inscrutable realm of "string theory" that our universe may be like an enormous hologram. However, science doesn't indulge sentimental favorites. After years of probing the fabric of spacetime for a signal of the "holographic principle," researchers at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, have come up empty, as they will report tomorrow at the lab. The null result won't surprise many people, as some of the inventors of the principle had complained that the experiment, the $2.5 million Fermilab Holometer, couldn't test it. But Yanbei Chen, a theorist at the California Institute of Technology in Pasadena, says the experiment and its inventor, Fermilab theorist Craig Hogan, deserve some credit for trying. "At least he's making some effort to make an experimental test," Chen says. "I think we should do more of this, and if the string theorists complain that this is not testing what they're doing, well, they can come up with their own tests." The holographic principle springs from the theoretical study of black holes, spherical regions where gravity is so intense that not even light can escape. Theorists realized that a black hole has an amount of disorder, or entropy, that is proportional to its surface area. As entropy is related to information content, some theorists suggested that an information-area connection might be extended to any properly defined volume of space and time, or spacetime. Thus, crudely speaking, the maximum amount of information contained in a 3D region of space would be proportional to its 2D surface area. The universe would then work a bit like a hologram, in which a 2D pattern captures a 3D image.

How to encrypt a message in the afterglow of the big bang If you've got a secret to keep safe, look to the skies. Physicists have proposed using the afterglow of the big bang to make encryption keys. The security of many encryption methods relies on generating large random numbers to act as keys to encrypt or decipher information.
Computers can spawn these keys with certain algorithms, but they aren't truly random, so another computer armed with the same algorithm could potentially duplicate the key. An alternative is to rely on physical randomness, like the thermal noise on a chip or the timing of a user's keystrokes. Now Jeffrey Lee and Gerald Cleaver at Baylor University in Waco, Texas, have taken that to the ultimate extreme by looking at the cosmic microwave background (CMB), the thermal radiation left over from the big bang.

LISA Pathfinder Heads to Space A trailblazing mission took to the skies early this morning as a Vega rocket carrying the LISA Pathfinder lit up the night over Kourou, French Guiana. Originally known as the Small Missions for Advanced Research in Technology (SMART 2) and a forerunner to the full-fledged Evolved Laser Interferometer Space Antenna (eLISA) project, LISA Pathfinder will test the technologies key to conducting long-baseline laser interferometry in space. Coming almost exactly 100 years after Einstein proposed his theory of general relativity, this mission will prove vital in the hunt for one of the theory's more bizarre predictions: gravitational waves. The equations of general relativity say that accelerating massive objects, such as exploding stars or a pair of whirling black holes, ought to send ripples through spacetime. There's solid indirect evidence that gravitational waves exist, but direct detection has eluded scientists so far. LISA Pathfinder paves the way for eLISA, which will take that hunt into space. Slated for launch in 2034, eLISA will use three free-flying spacecraft to create a triangular baseline a million kilometers on a side — a feat impossible on Earth. Lasers will measure the position of two masses suspended at the end of each arm, and then researchers will analyze the data to look for the very slight jiggling induced by gravitational waves passing by. The unique setup and location will give eLISA an unprecedented sensitivity.

Scientists Create New Kind Of Diamond At Room Temperature Researchers have created a new phase of solid carbon with qualities previously thought to be impossible that can be used to create diamonds at room temperature and the same atmospheric pressure as the ambient air. Scientists at North Carolina State University call it Q-carbon and say it is distinct from the other known solid forms of carbon – graphite and diamond. "The only place it may be found in the natural world would be possibly in the core of some planets," says NC State's Jay Narayan, lead author of three papers on the findings, including one published today in the Journal of Applied Physics. Q-carbon is ferromagnetic, which he says was thought to be impossible, and is also harder than diamond and can glow when exposed to even a small amount of energy.

Japanese scientists create touchable holograms A group of Japanese scientists have created touchable holograms, three-dimensional virtual objects that can be manipulated by human hand. Using femtosecond laser technology the researchers developed 'Fairy Lights', a system that can fire high-frequency laser pulses that last one millionth of one billionth of a second. The pulses respond to human touch, so that - when interrupted - the hologram's pixels can be manipulated in mid-air.
Positrons Are Plentiful In Ultra-Intense Laser Blasts Physicists from Rice University and the University of Texas at Austin have found a new recipe for using intense lasers to create positrons — the antiparticle of electrons — in record numbers and density. In a series of experiments described recently in the online journal Scientific Reports, published by Nature, the researchers used UT's Texas Petawatt Laser to make large numbers of positrons by blasting tiny gold and platinum targets. Although the positrons were annihilated in a fraction of a microsecond, the experiments have implications for new realms of physics and astrophysics research, medical therapy and perhaps even space travel, said Rice physicist Edison Liang, lead author of the study. "There are many futuristic technologies related to antimatter that people have been dreaming about for the last 50 years," said Liang, the Andrew Hays Buchanan Professor of Astrophysics. "One is that antimatter is the most efficient form of energy storage. When antimatter annihilates with matter, it becomes pure energy. Nothing is left behind, unlike in fusion or fission or chemical-based reactions."

Scientists Link Moon's Tilt and Earth's Gold At its birth, the moon was quite close to the Earth, probably within 20,000 miles. Because of the tidal pulls between the Earth and moon, the moon's orbit has slowly been spiraling outward ever since, and as it does, Earth's pull diminishes, and the pull of the sun becomes more dominant. By now, with the moon a quarter million miles from Earth, the sun's gravity should have tipped the moon's orbit to lie in the same plane as the orbits of the planets. But it has not. The moon's orbit is about 5 degrees askew. "That the lunar inclination is as small as it is gives us some confidence that the basic idea of lunar formation from an equatorial disk of debris orbiting the proto-Earth is a good one," said Kaveh Pahlevan, a planetary scientist at the Observatory of the Côte d'Azur in Nice, France. "But the story must have a twist." Writing in this week's issue of the journal Nature, Dr. Pahlevan and his observatory colleague Alessandro Morbidelli propose the twist. The moon did indeed form in the Earth's equatorial plane, the scientists said, but then a few large objects, perhaps as large as the moon, zipping through the inner solar system repeatedly passed nearby over a few tens of millions of years and tipped the moon's orbit.

New NASA technology straight out of "Star Trek" could help scientists detect life on other worlds. The device, dubbed the "chemical laptop," is a miniature, portable laboratory that resembles the TV show's famous tricorder scanning device, and is designed to make data collection easier and faster than ever before. The laptop, currently in development at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, is a chemical analyzer made to detect both amino acids and fatty acids, often called "the building blocks of life," in samples from extraterrestrial terrain. Amino acids bind together to create proteins, which are vital to almost all processes that occur within a cell, and fatty acids are an important component of cell membranes, so researchers believe finding both could indicate that life is now or was once present.

A Century Ago, Einstein's Theory of Relativity Changed Everything By the fall of 1915, Albert Einstein was a bit grumpy. And why not? Cheered on, to his disgust, by most of his Berlin colleagues, Germany had started a ruinous world war.
He had split up with his wife, and she had decamped to Switzerland with his sons. He was living alone. A friend, Janos Plesch, once said, "He sleeps until he is awakened; he stays awake until he is told to go to bed; he will go hungry until he is given something to eat; and then he eats until he is stopped."

Is Earth Growing a Hairy Dark Matter 'Beard'? Dark matter is thought to be everywhere, literally, but we can't see it; we can only detect its gravitational presence over large cosmic scales. Now, theoretical physicists are theorizing what configuration the dark stuff may take around Earth. And it's becoming a bit of a hairy subject. If we are to take the findings of a recent computer simulation to heart, it looks as if the planets in our solar system are growing rather trendy dark matter "beards", an idea that not only reveals a previously unknown interplanetary fashion trend, but could also provide a guide as to where to seek out direct evidence of the invisible matter that is thought to make up 85 percent of the mass of the entire universe.

Scientists caught a new planet forming for the first time ever When a new star is born, it creates a disk full of gas and dust — the stuff of planetary formation. But it's hard to catch alien stars in the process of planetary baby-making, because the same dust that creates planets helps obscure these distant solar systems from our sight. We've found young planets and old ones alike, but none of them have actually been in the process of forming — until now. This new study was led by University of Arizona graduate student Stephanie Sallum and Kate Follette, a former fellow graduate student who has since moved on to postdoctoral research at Stanford University. The two women were working on separate PhD projects, but had decided to focus on the same star — LkCa15, located 450 light years from Earth. "The reason we selected this system is because it's built around a very young star that has material left over from the star-formation process," Follette said in a statement. "It's like a big doughnut. This system is special because it's one of a handful of disks that has a solar-system-size gap in it. And one of the ways to create that gap is to have planets forming in there." The women and their colleagues set high-powered telescopes to look at the system and used a new technique to look for protoplanets. They searched for the light emitted by hydrogen as the gas falls toward a newly forming planet. That process is hot — roughly 17,500 degrees Fahrenheit — and it produces a signature red glow.

Experiment records extreme quantum weirdness An experiment in Singapore has pushed quantum weirdness close to its absolute limit. Researchers from the Centre for Quantum Technologies (CQT) at the National University of Singapore and the University of Seville in Spain have reported the most extreme 'entanglement' between pairs of photons ever seen in the lab. The result was published 30 October 2015 in Physical Review Letters. The achievement is evidence for the validity of quantum physics and will bolster confidence in schemes for quantum cryptography and quantum computing designed to exploit this phenomenon. "For some quantum technologies to work as we intend, we need to be confident that quantum physics is complete," says Poh Hou Shun, who carried out the experiment at CQT. "Our new result increases that confidence," he says. The researchers looked at 33.2 million optimized photon pairs.
Each pair was split up and the photons measured separately, then the correlation between the results quantified. In such a Bell test, the strength of the correlation says whether or not the photons were entangled. The measures involved are complex, but can be reduced to a simple number. Any value bigger than 2 is evidence for quantum effects at work. But there is also an upper limit. Quantum physics predicts the correlation measure cannot get any bigger than 2√2 ≈ 2.82843. In the experiment at CQT, they measure 2.82759 ± 0.00051, within 0.03% of the limit. If the peak value were the top of Everest, this would be only 2.6 metres below the summit.

Scientists look into hydrogen atom, find old recipe for pi

Strong forces make antimatter stick Antimatter is a shadowy mirror image of the ordinary matter we are familiar with. For the first time, scientists have measured the forces that make certain antimatter particles stick together. The findings, published in Nature, may yield clues to what led to the scarcity of antimatter in the cosmos today. The forces between antimatter particles - in this case antiprotons - had not been measured before. If antiprotons were found to behave in a different way to their "mirror images" (the ordinary proton particles that are found in atoms), it might provide a potential explanation for what is known as "matter/antimatter asymmetry".

Birth of universe modeled in massive data simulation Researchers are sifting through an avalanche of data produced by one of the largest cosmological simulations ever performed, led by scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory. The simulation, run on the Titan supercomputer at DOE's Oak Ridge National Laboratory, modeled the evolution of the universe from just 50 million years after the Big Bang to the present day - from its earliest infancy to its current adulthood. Over the course of 13.8 billion years, the matter in the universe clumped together to form galaxies, stars and planets; but we're not sure precisely how.

Modern Mystery: Ancient Comet Is Spewing Oxygen The Rosetta spacecraft has detected molecular oxygen in the gas streaming off comet 67P/Churyumov-Gerasimenko, a curious finding that has scientists rethinking the ingredients that were present in the early solar system. What's mystifying astronomers about the new find is why the oxygen wasn't annihilated during the solar system's formation. Molecular oxygen is extremely reactive with hydrogen, which was swirling in abundance as the sun and planets were created. Current solar system models suggest the molecular oxygen should have disappeared by the time 67P was created, about 4.6 billion years ago.

Ingredients for Life Were Always Present on Earth, Comet Suggests The basic building blocks of life may have been present on Earth from the very beginning. Astronomers detected 21 different complex organic molecules streaming from Comet Lovejoy during its highly anticipated close approach to the sun this past January. Many of these same carbon-containing compounds have also been spotted around newly forming sunlike stars, researchers said. "This suggests that our proto-planetary nebula was already enriched in complex organic molecules (as disk models suggested) when comets and planets formed," study lead author Nicolas Biver, of the Paris Observatory, said via email.
Life May Have Begun 4.1 Billion Years Ago on an Infant Earth Life may have emerged on Earth 4.1 billion years ago, much earlier than scientists had thought, and relatively soon after the planet formed, researchers say. Previous research suggested life may have arisen on Earth 3.83 billion years ago. The new findings suggest life started 270 million years earlier, and only about 440 million years after Earth formed about 4.54 billion years ago. If life on Earth did spring up relatively quickly, that suggests life could be abundant in the universe, scientists added.

Earth Bloomed Early: A Fermi Paradox Solution?

Perfectly accurate clocks turn out to be impossible

Our Universe: It's the 'Simplest' Thing We Know This conclusion may sound counterintuitive; after all, to fully understand the true complexities of Nature, you need to think bigger, study things on finer and finer scales, add new variables to equations, and think up "new" and "exotic" physics. Eventually we'll discover what dark matter is; eventually we'll gain a grasp of where those gravitational waves are hiding – if only our theoretical models were more advanced and more... complex.

Baylor Physicist Appointed to Management Team of Major Scientific Experiment at CERN

They're Out There! Most People Believe in E.T. Are humans alone in the universe? A majority of people, particularly guys, in the United States, United Kingdom and Germany say they believe that intelligent life is out there. Fifty-six percent of Germans, 54 percent of Americans and 52 percent of people from the United Kingdom believe that alien life capable of communication lives somewhere among the stars, according to a new survey by the marketing research firm YouGov.
Monday, May 17, 2010

Abramowitz/Stegun goes online

Did you ever need to learn about the properties of some obscure mathematical function which turns up when you try to solve, say, the Schrödinger equation with a linear potential? In the times before Wikipedia and Eric Weisstein's World of Mathematics/MathWorld, the usual way to proceed was to go to the library and look it up in the "Abramowitz/Stegun", a compilation of formulas, relations, graphs and data tables for all kinds of functions you can think of.

[Figure: Airy functions Ai(x), Bi(x) and M(x).]

Over the last years, Milton Abramowitz' and Irene A. Stegun's time-honored "Handbook of Mathematical Functions" has been carried over to the internet age as the Digital Library of Mathematical Functions, published by the US National Institute for Standards and Technology (NIST). Parts of the DLMF have been available for some time, but the complete site went online just last week, on May 11. In comparison to the old printed book, there are more functions and formulas, which all can be copied as LaTeX or MathML code. And while the function graphs at MathWorld are interactive, the DLMF features more detailed descriptions of applications in mathematics and physics, and links to freely available software libraries. Should I ever need to code Jacobian elliptic functions, I'll know where to look them up.

Via bit-player, where you can also read more about the history of the Abramowitz/Stegun.

Bryan said...
Or, you could search WolframAlpha. You can find the Jacobi elliptic equations here: Clicking any one takes you to a page where you can read about it and plot it.

Simon said...
I think you meant: The links then take you through to either the Mathematica documentation, the functions site or MathWorld. The functions site is probably the most useful and contains a heap of identities and relations for the functions. It's been my online replacement for A/S for the last few years.

Igor Khavkine said...
Well, FINALLY! The completion of the DLMF has been "around the corner" for at least five years. This should have been bigger news. Or not, judging from the muted reactions around me... :-) BTW, I believe the DLMF is not an update of A&S, but a complete rewrite.

Bee said...
Thanks for letting us know! Question: I've tried at some point to find online integral tables, but haven't found much. Does anybody know a good resource? I know Maple/Mathematica can do a lot of integrals even analytically, but the result isn't always useful and sometimes I need the fine print. Best,

Igor Khavkine said...
If you can read Russian (well, these kinds of books don't actually have a lot of words in them), the magical names Prudnikov, Brychkov, and Marichev will help.

Kay zum Felde said...
Hi Stefan, great news! Best, Kay

Bee said...
Igor: Was that a reply to my question? Dunno, I've turned Wolfram upside down (granted, that was 2 years back or so), but really couldn't find things like integrals over products of incomplete gamma functions or exp with a function in the argument etc. I mean, I have two books with integral tables (yes, it's some Russians, but I forget the names, it's the standard tables you find in any library), but I don't usually take them with me when I travel, thus my question. I've found parts of them online (needless to say, those were the parts I didn't need), so I've been wondering since if there isn't a more complete site with integral tables. Best,

Marco said...
Hi Bee, the famous book 'Table of Integrals, Series, and Products' by Gradshteyn and Ryzhik can be bought in CD format. You could take it with you or copy it onto your notebook.

Bee said...
Right, Gradshteyn, that's the book that I have (two volumes actually)! So I'll have to buy a CD... pooh. And my laptop doesn't have a CD-ROM, so what's the point?

Luke said...
Find a computer with a CD drive and rip it to a flash drive? That'd work, I'd imagine.

Bee said...
Sure, but it's so cumbersome. Why not just have a website online, if necessary one that charges a fee?

stefan said...
WolframAlpha relies, as far as functions and their properties and relations are concerned, on MathWorld, and MathWorld, at least, usually refers to the Abramowitz/Stegun. My impression is that the DLMF is more comprehensive than the Wolfram collection, albeit not as fancy, and without the impressive interactive plotting functionality of the Wolfram sites... On the other hand, it is easy to copy LaTeX code of formulas from the DLMF, and I would have appreciated the links to the function libraries in Fortran and C that they now offer when I had to do a few numerical calculations involving Bessel and Airy functions. BTW, the functions site has indeed a few integrals, e.g. here for the incomplete gamma function. No idea how this compares with the classical tables such as Ryzhik Gradshteyn. I have just seen, the latest edition of the Ryzhik Gradshteyn is searchable via Amazon's "look inside". BTW, does someone have some experience with ? Cheers, Stefan

Luke said...
No idea. I'd imagine the demand is simply not that great. Out of curiosity, what parts of the tables do you need? As in, why doesn't Maple/Mathematica sate your need?

Luke said...
I've used Wolfram Integrator. It works quite nicely for what I've needed to do. I haven't tried anything too complicated, however, but it definitely is able to handle most integrals I plug into it.

Igor Khavkine said...
Bee, yes, my last post was a two-part answer. The Wolfram site is great if you know the class of functions you are expected to get. Then the Integral Representations and Integration categories will have lots of potentially useful formulas. The Russian books of integrals are certainly available online. However, the method of obtaining them may offend some people's sensibilities. Those two are likely your best options in terms of online content, other than specialized literature scattered throughout research journals.

Igor Khavkine said...
@Stefan: it is essentially a web interface to Mathematica's Integrate command.

Uncle Al said...
Flash drives have an unexpected benefit: Homeland Severity seize and copy them. You can vote 16 GB of "no" every time you pass through a US airport. A necklace of high-capacity flash drives brimming with gibberish is the right thing to do. Intelligence requires constant preening, but stupidity is an engine of its own creation. Gorge the congenitally inconsequential.

William said...
Thanks Stefan, I added it to my Favs. (Now I have 4085 Favs in 50 root folders and 667 sub-folders, lol.) Many years ago, I created an equation for a point in 3D, F(x,y,z), such that it would be zero everywhere in space, except for a specified (x1, y1, z1) where F(x1,y1,z1) would have the value of 1. I used all natural, HS-level math functions to do it. I wonder if there's an equation for a point at the NIST site. I'll look sometime. What was curious is that I could make the volume of the point arbitrarily small, but never could get it to be exactly zero.
That made me wonder if a zero-dimensional, zero-volume point had any meaning or reality. I concluded that in reality, any "point" had to be a "fuzzy point" with arbitrarily small but non-zero size, since that was the limitation I could not overcome mathematically. Also, it was easy to add a time dimension and have the point spin in any one, two or three orthogonal directions; although it seemed odd to have a point spin. Oh, there was a problem: the point (magnitude +1) always came with an anti-point (magnitude -1), and six other quasi-points ("ghost points" as I called them, since they had a markedly different form and not a fully 3-D structure to them ... curiously, the magnitudes of the six quasi-points were: +1/3, +1/3, +2/3 and -1/3, -1/3, -2/3). The extraneous points were annoying to me, since I just wanted the equation to have a value (magnitude) of 1.0 at only one specific point in space. But I couldn't eliminate the anti-point and 6 quasi-points from the equation ... they were an inevitable part of the equation. So I modified the equation such that the anti-point and 6 quasi- (ghost) points were always at an infinite distance from the point at (x1, y1, z1) ... so that then, in a sense, they did not exist. Still, when I had the point spin, I couldn't help but realize that its anti-point and six quasi-points were, by necessity, also spinning, in infinite circles, out at an infinite distance away. Not too elegant. lol. But it was the best I could do. About 15 years later, Mathematica became available, so I was able to enter the equation into the program and plot it out. It plotted out as an arbitrarily small point with magnitude 1.0, just as it was designed to, which was gratifying to see.

Phil Warnell said...
This comment has been removed by the author.

Phil Warnell said...
This comment has been removed by the author.

Phil Warnell said...
Hi Stefan, thanks once again for being, as you so often are, the herald of good news. Admittedly this won't be something I would find reason to use much, yet it certainly expands further what is available to anyone having access to the web. It has me wondering if there will ever come a day when a physics researcher is hired whose CV, in the section referring to education, simply reads "WWW". It sounds unlikely, I know, yet perhaps this has more to do with our antiquated concepts of education catching up with our newly expanded potentials than anything else.

Bee said...
Hi Luke, Maple/Mathematica will sometimes just tell you an integral doesn't converge or simply spit out the same integral as a result, neither of which is helpful. I'd sometimes actually need to know for which cases it does converge, and rather than trusting Maple that it doesn't know an integral I'd rather look it up myself (doesn't happen too often though). Best,
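As a footnote to the post's remark about links to software libraries: a minimal sketch (my own, assuming SciPy, one freely available library of the kind the DLMF points to) evaluating the Airy functions shown in the figure above:

```python
import numpy as np
from scipy.special import airy  # returns the tuple (Ai, Ai', Bi, Bi')

x = np.linspace(-10.0, 2.0, 7)
Ai, Aip, Bi, Bip = airy(x)
for xi, a, b in zip(x, Ai, Bi):
    # Ai oscillates and decays for x < 0; Bi blows up for x > 0
    print(f"x = {xi:6.2f}   Ai(x) = {a: .6f}   Bi(x) = {b: .6e}")
```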
When I read descriptions of the many-worlds interpretation of quantum mechanics, they say things like "every possible outcome of every event defines or exists in its own history or world", but is this really accurate? This seems to imply that the universe only splits at particular moments when "events" happen. This also seems to imply that the universe only splits into a finite number of "every possible outcome". I imagine things differently. Rather than splitting into a finite number of universes at discrete times, I imagine that at every moment the universe splits into an uncountably infinite number of universes, perhaps as described by the Schrödinger equation. Which interpretation is right? (Or otherwise, what is the right interpretation?) If I'm right, how does one describe such a vast space mathematically? Is this a Hilbert space? If so, is it a particular subset of Hilbert space?

No one has any justifiable unique answers to such questions. The many-worlds interpretation isn't an actual theory of physics, an actual set of rules, ideas, or equations. It's just a vague and, when looked at with any precision, meaningless and vacuous philosophical paradigm. Obviously, proper quantum mechanics doesn't imply any splitting whatsoever. Any rule for when a splitting occurs is bound to be unnatural. The only "splitting" that proper QM allows is an approximate one, given by decoherence: the moment when the chances of parts of $\psi$ to "re-interfere" in the future are negligible. – Luboš Motl Jul 21 '12 at 7:11

@LubošMotl Your statement that "Obviously, proper quantum mechanics doesn't imply any splitting whatsoever" I don't really understand in this context. They are not explaining splitting, but the state vector reduction/collapse of the wavefunction. I agree that the many-worlds interpretation is physically flawed and has no mathematical basis as a theory. However, interpretations like the Many-Minds/multi-consciousness interpretation do. Moreover, this particular theory is complete, well defined and cannot be disproved from a physical standpoint. Of course, this does not make it correct! – Killercam Jul 21 '12 at 10:43

2 Answers

Many-worlders won't tell you this dirty little secret, but how often splitting happens, and how many worlds there are, depends upon the choice of coarse graining, and the coarse-graining resolution. No, it's not possible to ramp up the coarse graining all the way to the finest levels, because a decoherence/coherence threshold would be crossed. And no, there is no canonical coarse graining either. The preferred basis depends upon the environment. Always. What is the preferred basis for a closed self-contained universe?

A more accurate answer than "The preferred basis depends upon the environment. Always." would be that the supporters of the MWI haven't yet described any other mechanism by which it could arise, just as they haven't yet shown how the Born rule would emerge even for a finite system. – Niel de Beaudrap Jul 21 '12 at 11:52

The Many Worlds interpretation is popularly misunderstood. The wave function itself contains a spectrum of universes, one corresponding to each eigenvalue for a given operator. The "splitting" of the "many worlds" is represented by the time evolution of the wave function described by the Schrödinger equation. As Lubos mentions above, these "universes" only become separate through decoherence.
Consider, for example, a wave function in the position basis given by a delta function at x=0. This represents one universe. Now time-evolve the wave function using the Schrödinger equation. The delta function has now spread out a bit. It is peaked at x=0, but has non-zero values at x=+1 and x=-1. This represents the existence of universes in which the position of the particle is at x=0, x=+1, and x=-1. In some sense there are "more" universes at x=0 than at x=±1, because the wave function is more highly peaked at x=0. This is where some of the difficulty in the Many Worlds interpretation comes in: what ontology to use to describe the "splitting", "how many universes" are at x=0 vs x=±1, and so on. The main point I want to make is that the "splitting" is just an interpretation of what is happening with the evolution of the wave function according to the Schrödinger equation. Nothing "more" is actually happening. You model the "splitting" using the tried-and-true Schrödinger evolution of the wave function.

You imply that there is a spectrum (a countable infinity) of "possible universes". But is it actually a continuum (an uncountable infinity) of "possible universes"? Can the delta function have non-zero values at locations everywhere between 0 and ±1? Or maybe a better example (since I don't understand the delta function): in the double-slit experiment, can't a particular photon hit the detector plane at any point on the plane? (<- thus uncountably infinite possible universes) – John Berryman Jul 21 '12 at 13:04

@John Berryman The word 'spectrum' does not imply a countable infinity. It is a continuum representing an uncountably infinite number of universes in the example I gave. You can think of a delta function like a very narrow spike. The Schrödinger equation time-evolves a narrow spike into a wider and wider Gaussian shape. In the example, in order to keep things simple, I approximated this as {-1, 0, 1} (a very rough approximation, but it serves to illustrate the point). – user1247 Jul 21 '12 at 19:05
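A quick numerical companion to the spreading described in this answer (my own sketch, assuming NumPy; units with ħ = m = 1, and a narrow Gaussian standing in for the delta function, since a true delta is not normalizable):

```python
import numpy as np

# Position grid and the conjugate angular wavenumbers
N, box = 2048, 40.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

# Narrow normalized Gaussian approximating a delta function at x = 0
sigma0 = 0.1
psi0 = np.exp(-x**2 / (2*sigma0**2))
psi0 = psi0 / np.sqrt((np.abs(psi0)**2).sum() * dx)

def evolve_free(psi, t):
    """Exact free-particle evolution: each momentum component picks up exp(-i k^2 t / 2)."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-0.5j * k**2 * t))

for t in (0.0, 0.05, 0.2):
    prob = np.abs(evolve_free(psi0, t))**2
    rms = np.sqrt((x**2 * prob).sum() * dx)   # RMS width of the packet
    print(f"t = {t:4.2f}   rms width = {rms:.3f}")
```

The printed widths grow with time: the single initial spike smoothly develops support over a continuum of positions, which is all the "splitting" there is in this picture.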
Chapter 16 | Oscillatory Motion and Waves

40. A ladybug sits 12.0 cm from the center of a Beatles music album spinning at 33.33 rpm. What is the maximum velocity of its shadow on the wall behind the turntable, if illuminated parallel to the record by the parallel rays of the setting Sun?

16.7 Damped Harmonic Motion

41. The amplitude of a lightly damped oscillator decreases by 3.0% during each cycle. What percentage of the mechanical energy of the oscillator is lost in each cycle?

16.8 Forced Oscillations and Resonance

42. How much energy must the shock absorbers of a 1200-kg car dissipate in order to damp a bounce that initially has a velocity of 0.800 m/s at the equilibrium position? Assume the car returns to its original vertical position.

43. If a car has a suspension system with a force constant of 5.00×10^4 N/m, how much energy must the car's shocks remove to dampen an oscillation starting with a maximum displacement of 0.0750 m?

44. (a) How much will a spring that has a force constant of 40.0 N/m be stretched by an object with a mass of 0.500 kg when hung motionless from the spring? (b) Calculate the decrease in gravitational potential energy of the 0.500-kg object when it descends this distance. (c) Part of this gravitational energy goes into the spring. Calculate the energy stored in the spring by this stretch, and compare it with the gravitational potential energy. Explain where the rest of the energy might go.

45. Suppose you have a 0.750-kg object on a horizontal surface connected to a spring that has a force constant of 150 N/m. There is simple friction between the object and surface with a static coefficient of friction µ_s = 0.100. (a) How far can the spring be stretched without moving the mass? (b) If the object is set into oscillation with an amplitude twice the distance found in part (a), and the kinetic coefficient of friction is µ_k = 0.0850, what total distance does it travel before stopping? Assume it starts at the maximum amplitude.

46. Engineering Application: A suspension bridge oscillates with an effective force constant of 1.00×10^8 N/m. (a) How much energy is needed to make it oscillate with an amplitude of 0.100 m? (b) If soldiers march across the bridge with a cadence equal to the bridge's natural frequency and impart 1.00×10^4 J of energy each second, how long does it take for the bridge's oscillations to go from 0.100 m to 0.500 m?

16.9 Waves

47. Storms in the South Pacific can create waves that travel all the way to the California coast, which is 12,000 km away. How long does it take them if they travel at 15.0 m/s?

48. Waves on a swimming pool propagate at 0.750 m/s. You splash the water at one end of the pool and observe the wave go to the opposite end, reflect, and return in 30.0 s. How far away is the other end of the pool?

49. Wind gusts create ripples on the ocean that have a wavelength of 5.00 cm and propagate at 2.00 m/s. What is their frequency?

50. How many times a minute does a boat bob up and down on ocean waves that have a wavelength of 40.0 m and a propagation speed of 5.00 m/s?

51. Scouts at a camp shake the rope bridge they have just crossed and observe the wave crests to be 8.00 m apart. If they shake the bridge twice per second, what is the propagation speed of the waves?

52. What is the wavelength of the waves you create in a swimming pool if you splash your hand at a rate of 2.00 Hz and the waves propagate at 0.800 m/s?

53. What is the wavelength of an earthquake that shakes you with a frequency of 10.0 Hz and gets to another city 84.0 km away in 12.0 s?

54. Radio waves transmitted through space at 3.00×10^8 m/s by the Voyager spacecraft have a wavelength of 0.120 m. What is their frequency?

55. Your ear is capable of differentiating sounds that arrive at the ear just 1.00 ms apart. What is the minimum distance between two speakers that produce sounds that arrive at noticeably different times on a day when the speed of sound is 340 m/s?

56. (a) Seismographs measure the arrival times of earthquakes with a precision of 0.100 s. To get the distance to the epicenter of the quake, they compare the arrival times of S- and P-waves, which travel at different speeds. (See Figure 16.48.) If S- and P-waves travel at 4.00 and 7.20 km/s, respectively, in the region considered, how precisely can the distance to the source of the earthquake be determined? (b) Seismic waves from underground detonations of nuclear bombs can be used to locate the test site and detect violations of test bans. Discuss whether your answer to (a) implies a serious limit to such detection. (Note also that the uncertainty is greater if there is an uncertainty in the propagation speeds of the S- and P-waves.)

Figure 16.48 A seismograph as described in the above problem. (credit: Oleg Alexandrov)

16.10 Superposition and Interference

57. A car has two horns, one emitting a frequency of 199 Hz and the other emitting a frequency of 203 Hz. What beat frequency do they produce?

58. The middle-C hammer of a piano hits two strings, producing beats of 1.50 Hz. One of the strings is tuned to 260.00 Hz. What frequencies could the other string have?

59. Two tuning forks having frequencies of 460 and 464 Hz are struck simultaneously. What average frequency will you hear, and what will the beat frequency be?
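A quick numerical sketch (my own, not part of the problem set) of the two relations most of the wave problems above turn on, v = fλ for propagation and f_beat = |f₁ − f₂| for beats, applied to problems 51 and 57:

```python
# Problem 51: wave crests 8.00 m apart, bridge shaken twice per second
wavelength, frequency = 8.00, 2.00                                        # metres, hertz
print("propagation speed v = f * lambda =", frequency * wavelength, "m/s")  # 16.0 m/s

# Problem 57: two horns at 199 Hz and 203 Hz
f1, f2 = 199.0, 203.0
print("beat frequency |f1 - f2| =", abs(f1 - f2), "Hz")                     # 4.0 Hz
```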
A quantum approach to condensed matter physics, by Philip L. Taylor

This reader-friendly introduction to the theory that underlies the many fascinating properties of solids assumes only an elementary knowledge of quantum mechanics. Taylor and Heinonen describe the methods for performing calculations and making predictions of some of the many complex phenomena that occur in solids and quantum liquids. Their book, aimed at advanced undergraduates and beginning graduate students, leads the reader from the fundamental behavior of electrons and atoms in solids to the most recently explored manifestations of the quantum nature of condensed matter.

Similar quantum theory books

Introduction to Quantum Mechanics: Schrodinger Equation and Path Integral
After a consideration of basic quantum mechanics, this introduction aims at a side-by-side treatment of fundamental applications of the Schrödinger equation on the one hand and applications of the path integral on the other. Different from traditional texts and using a systematic perturbation method, the solution of Schrödinger equations includes also those with anharmonic oscillator potentials, periodic potentials, screened Coulomb potentials and a typical singular potential, as well as the investigation of the large-order behavior of the perturbation series.

The Stability of Matter in Quantum Mechanics
Research into the stability of matter has been one of the most successful chapters in mathematical physics, and is a prime example of how modern mathematics can be applied to problems in physics. A unique account of the subject, this book provides a complete, self-contained description of research on the stability of matter problem.

The Conceptual Framework of Quantum Field Theory
The book attempts to provide an introduction to quantum field theory emphasizing conceptual issues frequently neglected in more "utilitarian" treatments of the subject. The book is divided into four parts, entitled respectively "Origins", "Dynamics", "Symmetries", and "Scales". The emphasis is conceptual: the aim is to build the theory up systematically from some clearly stated foundational concepts, and it is therefore to a large extent anti-historical, but historical chapters ("Origins") are included to situate quantum field theory in the larger context of modern physical theories.

Absolute Radiometry. Electrically Calibrated Thermal Detectors of Optical Radiation
Absolute Radiometry: Electrically Calibrated Thermal Detectors of Optical Radiation considers the application of absolute radiometry, a technique employed in optical radiation metrology for the absolute measurement of radiant power. This book is composed of eight chapters and begins with the principles of the absolute measurement of radiant power.
This confirms the well known-fact that the gravitational potential of topographical masses of a finite thickness behaves like the potential of a thin layer when it is observed from a larger distance. The integration over the full solid angle f~t may be thus restricted to a small area (of radius ¢0) surrounding the computation point. A question arises how large the integration radius ¢0 should be chosen in order to keep a prescribed accuracy of topographical effects. 5 mgal in computing the direct topographical effect on gravity it is sufficient to integrate over a spherical cap of radius of 3 °. R =/~-t-/:/(~,t,) ~-(1+3cos¢) ( ~/eo~+H2(a')-~/eg+H~(O)+ ) R2 el +-2-(3 cos ~ ¢ - 1) In 2. 4), we may neglect the term i~/2R 2 with respect to 4 getting 1+3cos~=4-O(1× 10 -4 ) . 8) Within the same accuracy, we may further write 3 cos2¢ - 1 - 2 . 1). 9), we get ~-1' n . /\[R+H(~2') the planar approximation of the function z~- ~n, y;, r )[~'=R+H(~)in the form + R 2 In 2R + H(fY) + \/go2 + H2(f~') . 4)~ the indirect topographical effect on potential may be approximated as 5V(R, gl) -~ -27rG~0H2(f~) + GR2 go fa,o + H(a') + V/g0~ + H2(f~') + In 2R ~ + H ( a ) + ~/e~ + g ~ ( a ) 2 ~/'~ + H 2 ( I T ) - V/g°2+ H2(f~) R H(fY) - H(f~)' dO' . Download PDF sample Rated 4.46 of 5 – based on 22 votes
What does the Pauli Exclusion Principle mean if time and space are continuous?

Assuming time and space are continuous, identical quantum states seem impossible even without the principle. I guess saying something like "the closer the states are, the less likely they are to exist" would make sense, but the principle is not usually worded that way; it's usually something along the lines of: two identical fermions cannot occupy the same quantum state.

Your assertion in the second sentence needs to be backed up with physics, not gut feeling. – Jon Custer Oct 25 '16 at 13:02

Bound systems come with discrete states even in continuous spacetimes. – Christoph Oct 25 '16 at 13:04

@JonCuster no it doesn't, because that's the point of the question. – Nathaniel Oct 26 '16 at 10:12

My understanding is that it is more like "two identical fermions within the same local system (same atom, same molecule ...) cannot occupy the same quantum state". In other words, the Pauli Exclusion Principle only applies to small-scale things, and certainly not to things of cosmic scale. – Kevin Fegan Oct 26 '16 at 10:50

@KevinFegan but what is a "local system"? In white dwarves, for example, probably one of the systems in which Pauli's principle has the strongest effect, the electrons are effectively in a (degenerate) gas state, so not bound by any atom, molecule, or anything else. Unless you want in that case to consider the whole star as the "local system", but that's a bit of a stretch I think (can't you also consider the whole universe as a "local system"?). – glS Oct 26 '16 at 19:22

The other answer shows nicely how one may interpret the Pauli exclusion principle for actual wavefunctions. However, I want to address the underlying confusion here, encapsulated in the statement

"If time and space are continuous then identical quantum states are impossible to begin with."

in the question. This assertion is just plainly false. A quantum state is not given by a location in time and space. The often-used kets $\lvert x\rangle$ that are "position eigenstates" are not actually admissible quantum states since they are not normalized - they do not belong to the Hilbert space of states. Essentially by assumption, the space of states is separable, i.e. spanned by a countably infinite orthonormal basis. The states the Pauli exclusion principle is usually used for are not position states, but typically bound states like the states in a hydrogen-like atom, which are states $\lvert n,\ell,m_\ell,s\rangle$ labeled by four discrete quantum numbers. The exclusion principle says now that only one fermion may occupy e.g. the state $\lvert 1,0,0,+1/2\rangle$, and only one may occupy $\lvert 1,0,0,-1/2\rangle$. And then all states at $n=1$ are exhausted, and a third fermion must occupy a state of $n > 1$, i.e. it must occupy a state of higher energy. This is the point of Pauli's principle, which has nothing to do with the discreteness or non-discreteness of space. (In fact, since the solution to the Schrödinger equation is derived as the solution to a differential equation in continuous space, we see that non-discrete space does not forbid "discrete" states.)

This is a really important point: physicists do this clever trick where you quietly assume that you can use an uncountable basis, and you can't actually do that because the underlying maths falls apart horribly.
– tfb Oct 25 '16 at 13:48

One lingering point of uncertainty here is that the Pauli exclusion principle is invoked for more than just the levels of hydrogen. For example, for why you don't fall through the floor (example site only) - there's a hand-wavy "electrons can't occupy the same quantum state, so they can't occupy the same 'spot', so therefore they can't occupy the same space, hence the electron clouds of your feet don't fall through those of the floor." There's an implicit assumption that location equals quantum state there. – R.M. Oct 25 '16 at 16:03

@JgL Note the "uncountable" in tfb's comment. The $\lvert x\rangle$ really don't form a basis of the Hilbert space in the standard mathematical sense - any Hilbert/Schauder basis should be countable, and Hamel bases are rather useless. – ACuriousMind Oct 25 '16 at 16:36

What about time though? If time is continuous, wouldn't simultaneity be impossible? – Yogi DMT Oct 25 '16 at 17:53

@JgL I didn't say infinite-dimensional, I said uncountable, which is a very different thing: infinite but countable bases are one thing, infinite but uncountable ones are very different. Sliding between these two things without being clear about it is the 'clever physicist's trick' I referred to, and is a mathematical horror. – tfb Oct 25 '16 at 19:01

Real particles are never completely localised in space (well, except in the limit case of a completely undefined momentum), due to the uncertainty principle. Rather, they are necessarily in a superposition of a continuum of position and momentum eigenstates (a wave packet). Pauli's Exclusion Principle asserts that they cannot be in the same exact quantum state, but a direct consequence of this is that they tend to also not be in similar states. This amounts to an effective repulsive effect between particles.

You can see this by remembering that to get a physical two-fermion wavefunction you have to antisymmetrize it. This means that if the two single wavefunctions are similar in a region, the total two-fermion wavefunction will have nearly zero probability amplitude in that region, thus resulting in an effective repulsive effect.

To see this more clearly, consider the simple one-dimensional case, with two fermionic particles with partially overlapping wavefunctions. Let's call the wavefunctions of the first and second particle $\psi_A(x)$ and $\psi_B(x)$, respectively:

[figure: two partially overlapping single-particle wavefunctions $\psi_A(x)$ and $\psi_B(x)$]

The properly antisymmetrized wavefunction of the two fermions will be given by:
$$ \Psi(x_1,x_2) = \frac{1}{\sqrt2}\left[ \psi_A(x_1) \psi_B(x_2)- \psi_A(x_2) \psi_B(x_1) \right]. $$
For any pair of values $x_1$ and $x_2$, $\lvert\Psi(x_1,x_2)\rvert^2$ gives the probability of finding one particle in the position $x_1$ and the other particle in the position $x_2$. Plotting $\lvert\Psi(x_1,x_2)\rvert^2$ we get the following:

[figure: surface plot of $\lvert\Psi(x_1,x_2)\rvert^2$, vanishing along the diagonal $x_1=x_2$]

As you can clearly see from this picture, for $x_1=x_2$ the probability vanishes, as an immediate consequence of Pauli's exclusion principle: you cannot find the two identical fermions in the same position state. But you also see that the closer $x_1$ is to $x_2$, the smaller the probability, as it must be, the wavefunction being continuous.

Addendum: Can the effect of Pauli's exclusion principle be thought of as a force in the conventional $F=ma$ sense?
The QM version of what is meant by force in the classical setting is an interaction mediated by some potential, like the electromagnetic interaction between electrons. This is in practice an additional term in the Hamiltonian of the system, which says that certain states (say, same charges very close together) correspond to high-energy states and are therefore harder to reach, and vice versa for low-energy states. Pauli's exclusion principle is conceptually entirely different: it is not due to an increase of energy associated with identical fermions being close together, and there is no term in the Hamiltonian that mediates such an "interaction" (important caveat here: these "exchange forces" can be approximated to a certain degree as "regular" forces). Rather, it comes from the inherently different statistics of many-fermion states: it is not that identical fermions cannot be in the same state/position because there is a repulsive force preventing it, but that there is no physical (many-body) state associated with them being in the same state/position. There simply isn't: it's not something compatible with the physical reality described by quantum mechanics. We naively think of such states because we are used to thinking classically and cannot really wrap our heads around what the concept of "identical particles" really means.

Ok, but what about things like degeneracy pressure then? In some circumstances, like in dying stars, Pauli's exclusion principle really seems to behave like a force in the conventional sense, contrasting the gravitational force and preventing white dwarves from collapsing into a point. How do we reconcile the above-described "statistical effect" with this?

What I think is a good way of thinking about this is the following: you are trying to squish a lot of fermions into the same place. However, Pauli's principle dictates a vanishing probability of any pair of them occupying the same position. The only way to reconcile these two things is that the position distribution of any fermion (say, the $i$-th fermion) must be extremely localised at a point (call it $x_i$), different from all the other points occupied by the other fermions. It is important to note that I just cheated for the sake of clarity here: you cannot talk of any fermion as having an individual identity: any fermion will be very strictly confined in all the $x_i$ positions, provided that all the other fermions are not. The net effect of all this is that the properly antisymmetrized wavefunction of the whole system will be a superposition of lots of very sharp peaks in the high-dimensional position space. And it is at this point that Heisenberg's uncertainty comes into play: a very peaked distribution in position means a very broad distribution in momentum, which means very high energy, which means that the more you want to squish the fermions together, the more energy you need to provide (that is, classically speaking, the harder you have to "push" them together).

To summarize: due to Pauli's principle the fermions try so hard not to occupy the same positions that the resulting many-fermion wavefunction describing the joint probabilities becomes very peaked, greatly increasing the kinetic energy of the state, thus making such states "harder" to reach.

Here (and links therein) is another question discussing this point.

Extending this a bit: the phrase "the closer the states are the less likely they are to exist" is expressed mathematically for multi-particle systems by the pair correlation function.
• Extending this a bit, the phrase "the closer the states are, the less likely they are to exist" is expressed mathematically for multi-particle systems by the pair correlation function. So while the wording isn't exactly as the OP phrased it, the content is expressed. – garyp Oct 25 '16 at 13:22

• Thanks, this is a good answer as well. I only chose the other one because it addressed my particular misunderstanding a little bit more. – Yogi DMT Oct 25 '16 at 14:37

• One thing I've often wondered about this is: does the effective repulsiveness constitute a force in the normal F=ma sense of the word? And why don't we include this force in the list of fundamental forces? – spraff Oct 26 '16 at 11:39

• @spraff I think that is a very interesting question. Edited the post to try to answer it (but see also the many other questions on that topic). – glS Oct 26 '16 at 13:57

• As a non-physicist (math/cs), this is probably the best and most helpful explanation I have ever read for explaining the PEP and particularly its implied apparent continuous/discrete paradox. The 3D graph was especially helpful, thnx. – RBarryYoung Oct 26 '16 at 18:18
How are anyons possible? (another version)

I know that this question has been submitted several times (see especially How are anyons possible?), even as a byproduct of other questions. Since I did not find any completely satisfactory answers, I submit here another version of the question, stated in a very precise form using only very elementary general assumptions of quantum physics. In particular, I will not use any operator (indicated by $P$ in other versions) representing the swap of particles.

Assume we deal with a system of a couple of identical particles, each moving in $R^2$. Neglecting for the moment the fact that the particles are indistinguishable, we start from the Hilbert space $L^2(R^2)\otimes L^2(R^2)$, which is isomorphic to $L^2(R^2\times R^2)$. Now I divide the rest of my issue into several elementary steps.

(1) Every element $\psi \in L^2(R^2\times R^2)$ with $||\psi||=1$ defines a state of the system, where $||\cdot||$ is the $L^2$ norm.

(2) Each element of the class $\{e^{i\alpha}\psi \mid \alpha \in R\}$, for $\psi \in L^2(R^2\times R^2)$ with $||\psi||=1$, defines the same state, and a state is such a set of vectors.

(3) Each $\psi$ as above can be seen as a complex-valued function defined, up to zero-measure (Lebesgue) sets, on $R^2\times R^2$.

(4) Now consider the "swapped state" defined (due to (1)) by the element $\psi' \in L^2(R^2\times R^2)$ given by the function (up to a zero-measure set):
$$\psi'(x,y) := \psi(y,x)\,,\quad (x,y) \in R^2\times R^2\,.$$

(5) The physical meaning of the state represented by $\psi'$ is that of a state obtained from $\psi$ with the roles of the two particles interchanged.

(6) As the particles are identical, the state represented by $\psi'$ must be the same as that represented by $\psi$.

(7) In view of (1) and (2), it must be that
$$\psi' = e^{ia} \psi\quad \mbox{for some constant } a\in R\,.$$

Here physics stops. I will use only mathematics henceforth.

(8) In view of (3), one can equivalently rewrite the identity above as
$$\psi(y,x) = e^{ia}\psi(x,y) \quad \mbox{almost everywhere for } (x,y)\in R^2\times R^2\,.\quad [1]$$

(9) Since $(x,y)$ in [1] is every pair of points up to a zero-measure set, I am allowed to change their names, obtaining
$$\psi(x,y) = e^{ia}\psi(y,x) \quad \mbox{almost everywhere for } (x,y)\in R^2\times R^2\,.\quad [2]$$
(Notice that the zero-measure set where the identity fails remains a zero-measure set under the reflexion $(x,y) \mapsto (y,x)$, since the latter is an isometry of $R^4$ and Lebesgue measure is invariant under isometries.)
(10) Since, again, [2] holds almost everywhere, I am allowed to use [1] again in the right-hand side of [2], obtaining:
$$\psi(x,y) = e^{ia}e^{ia}\psi(x,y) \quad \mbox{almost everywhere for } (x,y)\in R^2\times R^2\,.$$
(This certainly holds true outside the union of the zero-measure set $A$ where [1] fails and the set obtained from $A$ itself by the reflexion $(x,y) \mapsto (y,x)$.)

(11) Conclusion:
$$[e^{2ia} -1]\, \psi(x,y)=0 \qquad\mbox{almost everywhere for } (x,y)\in R^2\times R^2\,.\quad [3]$$
Since $||\psi|| \neq 0$, $\psi$ cannot vanish almost everywhere on $R^2\times R^2$. If $\psi(x_0,y_0) \neq 0$, then $[e^{2ia} -1] \psi(x_0,y_0)=0$ implies $e^{2ia}=1$, and so:
$$e^{ia} = \pm 1\,.$$
And thus, apparently, anyons are not permitted. Where is the mistake?

ADDED REMARK. (10) is a completely mathematical result. Here is another way to obtain it. (8) can be written down as $\psi(a,b) = e^{ic} \psi(b,a)$ for some fixed $c \in R$ and all $(a,b) \in R^2 \times R^2$ (I disregard the issue of negligible sets). Choosing first $(a,b)=(x,y)$ and then $(a,b)=(y,x)$, we obtain, respectively, $\psi(x,y) = e^{ic} \psi(y,x)$ and $\psi(y,x) = e^{ic} \psi(x,y)$. Together they immediately produce [3]: $\psi(x,y) = e^{i2c} \psi(x,y)$. So the physical argument (4)-(7), namely that we have permuted the particles once more and thus a further new phase may appear, does not apply here.

2nd ADDED REMARK. It is clear that as soon as one is allowed to write $\psi(x,y) = \lambda \psi(y,x)$ for a constant $\lambda\in U(1)$ and all $(x,y) \in R^2\times R^2$, the game is over: $\lambda$ turns out to be $\pm 1$ and anyons are forbidden. This is just mathematics, however. My guess for a way out is that the true configuration space is not $R^2\times R^2$ but some other space of which $R^2 \times R^2$ is the universal covering. An idea (quite rough) could be the following. One should assume that particles are indistinguishable from scratch, already when defining the configuration space, which is then something like $Q := R^2\times R^2/\sim$, where $(x',y')\sim (x,y)$ iff $x'=y$ and $y'=x$. Or perhaps one should first remove the set $\{(z,z)\:|\: z \in R^2\}$ from $R^2\times R^2$ before taking the quotient, to say that particles cannot occupy the same place. Assume the former case for the sake of simplicity. There is a (double?) covering map $\pi : R^2 \times R^2 \to Q$. My guess is the following. If one defines wavefunctions $\Psi$ on $R^2 \times R^2$, one automatically defines many-valued wavefunctions on $Q$; I mean $\psi:= \Psi \circ \pi^{-1}$. The problem of many values physically does not matter if the difference between the two values (assuming the covering is a double one) is just a phase, and this could be written, in view of the identification $\sim$ used to construct $Q$ out of $R^2 \times R^2$:
$$\psi(x,y)= e^{ia}\psi(y,x)\,.$$
Notice that the identity cannot be interpreted literally, because $(x,y)$ and $(y,x)$ are the same point in $Q$, so my trick for proving $e^{ia}=\pm 1$ cannot be implemented. The situation is similar to that of QM on $S^1$ inducing many-valued wavefunctions from its universal covering $R$. In that case one writes $\psi(\theta)= e^{ia}\psi(\theta + 2\pi)$.

3rd ADDED REMARK. I think I solved the problem I posted, focusing on the model of a couple of anyons discussed on p. 225 of the paper matwbn.icm.edu.pl/ksiazki/bcp/bcp42/bcp42116.pdf suggested by Trimok.
The model is simply this one:
$$\psi(x,y):= e^{i\alpha \theta(x,y)} \varphi(x,y)$$
where $\alpha \in R$ is a constant, $\varphi(x,y)= \varphi(y,x)$ for $(x,y) \in R^2 \times R^2$, and $\theta(x,y)$ is the angle of the segment $xy$ with respect to some fixed axis. One can pass to coordinates $(X,r)$, where $X$ describes the center of mass and $r:= y-x$. Swapping the particles means $r\to -r$. Without paying attention to mathematical details, one sees that, in fact:
$$\psi(X,-r)= e^{i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{i \alpha \pi} \psi(y,x)\quad (A)$$
for an anticlockwise rotation. (For clockwise rotations a sign $-$ appears in the phase, describing the other element of the braid group $Z_2$. Also notice that, for $\alpha \pi \neq 0, 2\pi$, the function vanishes for $r=0$, namely $x=y$, and this corresponds to the fact that we removed the set $C$ of coincidence points $x=y$ from the space of configurations.)

However, closer scrutiny shows that the situation is more complicated. The angle $\theta(r)$ is not well defined without fixing a reference axis where $\theta =0$. Afterwards one may assume, for instance, $\theta \in (0,2\pi)$; otherwise $\psi$ must be considered multi-valued. With the choice $\theta(r) \in (0,2\pi)$, (A) does not hold everywhere. Consider an anticlockwise rotation of $r$. If $\theta(r) \in (0,\pi)$, then (A) holds in the form
$$\psi(X,-r)= e^{+ i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{+ i \alpha \pi} \psi(y,x)\quad (A1)$$
but for $\theta(r) \in (\pi, 2\pi)$, and again for an anticlockwise rotation, one finds
$$\psi(X,-r)= e^{-i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{- i \alpha \pi} \psi(y,x)\,.\quad (A2)$$
Different results arise with different conventions. In any case, it is evident that the phase due to the swap process is a function of $(x,y)$ (even if locally constant) and not a constant. This invalidates my "no-go proof", but it also proves that the notion of anyon statistics is deeply different from the standard one based on the groups of permutations, where the phase due to the swap of particles is constant in $(x,y)$. As a consequence, the swapped state is different from the initial one, differently from what happens for bosons or fermions, and against the idea that anyons are indistinguishable particles. [Notice also that, in the considered model, swapping the initial pair of bosons means $\varphi(x,y) \to \varphi(y,x)= \varphi(x,y)$, that is, $\psi(x,y)\to \psi(x,y)$. In other words, swapping anyons does not mean swapping the associated bosons, and this is correct, as it is another physical operation on different physical subjects.]

Alternatively, one may think of the anyon wavefunction $\psi(x,y)$ as a multi-valued one, again differently from what I assumed in my "no-go proof" and differently from the standard assumptions of QM. This produces a truly constant phase in (A). However, it is not clear to me whether, with this interpretation, the swapped state of anyons is the same as the initial one, since I never seriously considered things like Hilbert spaces of multi-valued functions (if any), and I do not understand what happens to the ray representation of states. This picture is physically convenient, however, since it leads to a tenable interpretation of (A), and the action of the braid group turns out to be explicit and natural.
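A small numerical check of this last point (my own sketch; the value of $\alpha$, the symmetric factor $\varphi$, and the branch choice $\theta \in (0,2\pi)$ are assumptions of the example, and the overall signs depend on the conventions discussed above):

```python
# psi(x, y) = exp(i*alpha*theta(x, y)) * phi(x, y), with theta the angle of the
# segment from x to y forced into the branch (0, 2*pi). The "swap phase"
# psi(x, y) / psi(y, x) comes out as exp(-i*alpha*pi) in one half-plane and
# exp(+i*alpha*pi) in the other: locally constant, but not a single constant.
import cmath, math

ALPHA = 0.3  # anyonic parameter

def theta(x, y):
    """Angle of y - x with respect to a fixed axis, branch in (0, 2*pi)."""
    return math.atan2(y[1] - x[1], y[0] - x[0]) % (2 * math.pi)

def phi(x, y):
    """A smooth symmetric ('bosonic') factor: phi(x, y) == phi(y, x)."""
    return math.exp(-((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2))

def psi(x, y):
    return cmath.exp(1j * ALPHA * theta(x, y)) * phi(x, y)

for x, y in [((0.0, 0.0), (1.0, 0.5)),    # theta(y - x) in (0, pi)
             ((0.0, 0.0), (1.0, -0.5))]:  # theta(y - x) in (pi, 2*pi)
    ratio = psi(x, y) / psi(y, x)
    print(f"theta = {theta(x, y):.3f} rad -> swap phase = "
          f"exp({cmath.phase(ratio) / math.pi:+.3f} * i*pi)")
```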
Actually, a last possibility appears. One could deal with (standard complex-valued) wavefunctions defined on $(R^2 \times R^2 - C)/\sim$ as we know (see above; $C$ is the set of pairs $(x,y)$ with $x=y$), and define the swap operation in terms of phases only (so that my "no-go proof" cannot be applied and the transformations do not change the states):
$$\psi([(x,y)]) \to e^{g i\alpha \pi}\psi([(x,y)])$$
where $g \in Z_2$. This can be extended to many particles by passing to the braid group of many particles. Maybe it is mathematically convenient, but it is not very physically expressive.

In the model discussed in the paper I mentioned, it is however evident that, up to a unitary transformation, the Hilbert space of the theory is nothing but a standard bosonic Hilbert space, since the considered wavefunctions are obtained from those of that space by means of a unitary map associated with a singular gauge transformation, and just that singularity gives rise to all the interesting structure! However, in the initial bosonic system the singularity was pre-existent: the magnetic field was a sum of Dirac deltas. I do not know if it makes sense to think of anyons independently of their dynamics. And I do not know if this result is general. I guess that moving the singularity from the statistics to the interaction and vice versa is just what happens in the path integral formulation when moving the external phase into the action; see Tengen's answer.

asked Dec 17, 2013 in Theoretical Physics by Valter Moretti

4 Answers

The answer can probably be summarized in two points:

1) As discussed in a beautiful paper by Leinaas and Myrheim, the configuration space of a system of $N$ identical particles in $n$ dimensions is not $\mathbb{R}^{nN}$, but $\mathbb{R}^{nN}/S_N$, where we mod out the action of the permutation group $S_N$ (and also remove the singularities that happen when two particles occupy the same point).

2) Quantum mechanics is not about functions from the configuration space to the complex numbers, $\psi : \mathbb{R}^{nN}/S_N \to \mathbb{C}$, but, in modern terms, about sections of vector bundles over the configuration space. In classic terms, one would argue that the phase of the wave function is unobservable, and hence can be multi-valued, as discussed in Dirac's paper on magnetic monopoles.

It turns out that in $n=2$ dimensions, the configuration space of identical particles supports not just two, but many different interesting vector bundles, which correspond to anyons.

answered Apr 16, 2016 by Greg Graviton

In quantum mechanics on non-simply connected spaces, we can use wave functions living on the universal covering space. In order to see that, we must remember that the symplectic potential of identical particles must include, beyond the free-particle piece, an extra piece given by a flat connection represented by a magnetic vector potential. This also happens, for example, in the Aharonov-Bohm case. In the two-dimensional identical-particle case, where anyons exist, this vector potential can only have the following form:
$$\mathbf{A} =\frac{\theta}{2 \pi} \frac{(-y,\, x)}{x^2+y^2}$$
where $\mathbf{r}_{12} = (x,y)$ is the relative particle position. As a form, this vector potential is closed but not exact.
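A quick numerical sanity check of the two properties just stated (my own sketch, with an illustrative value of $\theta$): the curl of $\mathbf{A}$ vanishes away from the coincidence point, yet the line integral around the coincidence point returns $\theta$, which is what "closed but not exact" means here.

```python
# Flatness and holonomy of A(r) = (theta / 2 pi) * (-y, x) / (x^2 + y^2).
import math

THETA = 1.2  # statistics parameter (illustrative)

def A(x, y):
    r2 = x * x + y * y
    return (-THETA / (2 * math.pi) * y / r2, THETA / (2 * math.pi) * x / r2)

# Curl at a generic point away from the origin, by central differences
h, (x0, y0) = 1e-6, (0.7, -0.4)
curl = ((A(x0 + h, y0)[1] - A(x0 - h, y0)[1]) / (2 * h)
        - (A(x0, y0 + h)[0] - A(x0, y0 - h)[0]) / (2 * h))
print("curl away from the origin:", curl)  # ~ 0 (flat connection)

# Holonomy: line integral of A along a circle enclosing the origin
n, R, loop = 100_000, 1.0, 0.0
for k in range(n):
    ang = 2 * math.pi * k / n
    ax, ay = A(R * math.cos(ang), R * math.sin(ang))
    loop += (ax * -math.sin(ang) + ay * math.cos(ang)) * R * (2 * math.pi / n)
print("loop integral:", loop, " vs theta:", THETA)  # equal: not exact
```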
Please see, for example, Wu's article: http://content.lib.utah.edu/utils/getfile/collection/uspace/id/4293/filename/3674.pdf

The solution of the Schrödinger equation with this type of magnetic potential is equivalent to taking the following solution without the vector potential:
$$\Psi(z_1, z_2) = (z_1-z_2)^{\frac{\theta}{2\pi}}\psi(z_1, z_2)$$
where $z_{1,2}$ are complex coordinates on the plane and $\psi(z_1, z_2)$ is single-valued. (Please see Wu for the details.)

Actually, the 2D case is conceptually simpler than the higher-dimensional ones, where only bosons and fermions exist, because in higher dimensions the flat connection is torsion and has no vector potential representative in de Rham cohomology, but only in Čech cohomology.

answered Apr 17, 2016 by David Bar Moshe

The best way to answer the question "How are anyons possible?" is to use the "dynamical" path integral formalism, rather than the "static" wave function formalism. The permutation-group action on the wave function is "static" in the sense that only initial and final states are specified. It will be ambiguous if there is more than one non-equivalent way to perform the exchange process, which is the key to the "possibility" of anyons.

Consider the amplitude from the initial state $|i\rangle$ to the final state $|f\rangle$ in the path integral formalism
$$\langle f|i\rangle = \int_\gamma \mathcal{D}x(t)\, e^{iS[x(t)]},$$
where $\gamma$ is a path from the initial configuration to the final configuration (taken here to be the same). The configuration manifold will be discussed later. When two paths $\gamma_1$ and $\gamma_2$ are not homotopically equivalent, we can assign a phase factor $e^{i\theta([\gamma])}$ to the path integral amplitude for each homotopy class $[\gamma]$:
$$\langle f|i\rangle = \sum_{[\gamma]\in \pi_1(M)} e^{i\theta([\gamma])}\int_\gamma \mathcal{D}x(t)\, e^{iS[x(t)]},$$
where $\pi_1(M)$ denotes the fundamental group of the configuration space $M$. The phase factors $\{e^{i\theta([\gamma])}\}$ form a one-dimensional representation of the group $\pi_1(M)$ because of the multiplication property of the propagator: $\langle f|i\rangle=\sum_m \langle f|m\rangle\langle m|i\rangle$. If we absorb the phase $\theta$ into the action $S$, it will be called a topological term, as it depends only on the homotopy class.

The next task is to calculate the one-dimensional representations of the fundamental group of the configuration space. For $N$ identical particles in $d$ space dimensions, the configuration space is $M=(\mathbb R^{Nd}\backslash D)/S_N$, where $D=\{(r_1,...,r_N) \mid \exists\, i\neq j\ \mbox{s.t.}\ r_i=r_j\}$ is the set of configurations where two particles occupy the same point, and "$/S_N$" means the order of the particles is neglected.

(1) $d=1$. No exchange process can happen, and the notion of statistics is meaningless.

(2) $d=2$. $\pi_1(M)=B_N$ is the braid group. The one-dimensional representations of $B_N$ are characterized by an angle $\theta$, which corresponds to the statistical angle of the Abelian anyon.

(3) $d\geq 3$. $\pi_1(M)=S_N$ is the permutation group. This means that we only need to specify the order of the particles in the initial and final states to determine which homotopy class the path $\gamma$ belongs to. Therefore, only in this case can the wave function formalism be used without ambiguity.
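To illustrate the one-dimensional-representation property in case (2) with code (my own sketch, not part of the original answer): for two Abelian anyons, the homotopy classes of exchange processes can be labelled by an integer winding number, and the phases assigned to them must compose multiplicatively.

```python
# Phases e^{i*theta*n} assigned to winding classes n form a one-dimensional
# representation: concatenating paths adds winding numbers, and the weights
# multiply accordingly (the propagator's multiplication property).
import cmath, itertools

THETA = 0.7  # statistical angle (illustrative)

def weight(n: int) -> complex:
    """Phase factor assigned to the homotopy class with winding number n."""
    return cmath.exp(1j * THETA * n)

for n1, n2 in itertools.product(range(-3, 4), repeat=2):
    assert cmath.isclose(weight(n1) * weight(n2), weight(n1 + n2))

print("single exchange:", weight(1))  # e^{i*theta}: neither +1 nor -1
print("double exchange:", weight(2))  # generally != 1 in d = 2
```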
To describe non-Abelian anyons, one only needs to replace the phase factor $e^{i\theta}$ by a unitary matrix. The result is that non-Abelian anyons are determined by the higher-dimensional representations of the fundamental group of the configuration space.

answered Dec 19, 2013 by Tengen

• May I ask how to prove the claims "$d=2$: $\pi_1(M)=B_N$" and "$d\geq 3$: $\pi_1(M)=S_N$"?

The point (11) is not correct: after doing 2 successive "exchanges", you may have a global phase factor, such as $\psi'(x,y) = e^{i\alpha}\psi(x,y)$; the two wave functions describe the same physical state. The correct considerations are topological: instead of considering a discrete operation, consider a continuous operation, so that the exchange is equivalent to keeping one particle fixed and making a rotation of the other particle by $2\pi$. The solution is in fact to look at the fundamental group (first homotopy group) of $SO(d)$, where $d$ is the number of spatial dimensions (we suppose here only one time dimension). The structure and the dimension of the fundamental group (the number of different classes of paths) are correlated with the number of possible statistics. Of course, the fundamental group of $SO(d)$ with $d\geq 3$ is $\mathbb Z_2$, while the fundamental group of $SO(2)$ is $\mathbb Z$. This explains why we find different statistics (anyons) in $2$ spatial dimensions.

answered Dec 17, 2013 by Trimok
Anticipation – A Spooky Computation

Conference on Computing Anticipatory Systems (CASYS 99), Liege, Belgium, August 8-11, 1999

Mihai Nadin
Program in Computational Design, University of Wuppertal
Computer Science, Center for the Study of Language and Information, 201 Cordura Hall, Stanford University

Robert Rosen, in memoriam

As the subject of anticipation claims its legitimate place in current scientific and technological inquiry, researchers from various disciplines (e.g., computation, artificial intelligence, biology, logic, art theory) make headway in a territory of unusual aspects of knowledge and epistemology. Under the heading of anticipation, we encounter subjects such as preventive caching, robotics, advanced research in biology (defining the living) and medicine (especially genetically transmitted disease), along with fascinating studies in art (music, in particular). These make up a broad variety of fundamental and applied research focused on a controversial concept. Inspired by none other than Einstein–he referred to spooky actions at a distance, i.e., what became known as quantum non-locality–the title of the paper is meant to submit my hypothesis that anticipatory processes are related to quantum non-locality. The second goal of this paper is to offer a cognitive framework–based on my early work on mind processes (1988)–within which the variety of anticipatory horizons invoked today finds a grounding that is both scientifically relevant and epistemologically coherent. The third goal of this paper is to identify the broad conceptual categories under which we can identify progress made so far and possible directions to follow. The fourth and final goal is to submit a co-relation view of anticipation and to integrate the inclusive recursion in a logic of relations that handles co-relations.

Keywords: auto-suggestive memory, co-relation, non-locality, quantum semiotics, self-constitution, interactive computation

1 Introduction

Anticipation could become the new frontier in science. Trends, scientific fashions, and priority funding programs succeed one another rapidly in a society that experiences a dynamics of change reflected in ever shorter cycles of discovery, production, and consumption. Frontiers mark stark discontinuities that ascertain fundamentally new knowledge horizons. Einstein stated, "No problem can be solved from the same consciousness that created it. We must learn to see the world anew." It is in this respect that I find it extremely important to begin by putting the entire effort into a broad perspective.

2 The Philosophic Foundation of Anticipation is Not Trivial

Philosophical considerations cannot be avoided (provided that they are not pursued as a means in themselves). Robert Rosen (1985) quoted David Hawkins: "Philosophy may be ignored but not escaped." Rosen, whose work deserves to be integrated into current scientific dialog more than has been the case until his untimely death, understood this thought very well. Anticipation bears a heavy burden of interpretations. As initial attempts (Rosen, 1985; Nadin, 1988; Dubois, 1992) to recover the concept and to give it a scientific foundation prove, the task is difficult. We face here the dominant deterministic view inspired by a model of the universe in which a net distinction between cause and effect can be made. We also face a reductionist understanding of the world, which claims that physics is paradigmatic for everything else.
Moreover, we are captive to an understanding of time and space that corresponds to the mathematical descriptions of the physical world: Time is uniquely defined along the arrow from past to future; space is homogeneous. Finally, we are given to the hope that science leads to laws on whose basis we may make accurate predictions. Once we accept these laws, anticipation can at best be accepted as one such prediction, but not as a scientific endeavor on its own terms.

A clear image of the difficulties in establishing this foundation results from revisiting Rosen's work on anticipatory systems, above all his fundamental work, Life Itself (1991). Indeed, his rigorous argumentation, based on solid mathematical work and on a grounding in biology second to none among his peers, makes sense only against the background of the philosophic considerations set forth in his writings. It might not matter to a programmer whether Aristotle's causa finalis (final cause) can be ascertained or justified, or is deemed passé and unacceptable. A programmer's philosophy does not directly affect lines of code; neither do disputes among those partial to a certain world view. What is affected is the general perspective, i.e., the understanding of a program's meaning. If the program displays characteristics of anticipation, the philosophic grounding might affect the realization that within a given condition–such as embodied in a machine–the simulation of anticipatory features should not be construed as anticipation per se.

The philosophic foundation is also a prerequisite for defining how far the field can be extended without ending up in a different cognitive realm. Regarding this aspect, it is better to let those trying to expand the inquiry of anticipation–let me mention again Dubois (since 1996) and the notions of incursion and hyperincursion, and Holmberg (since 1997) and space aspects–express themselves on the matter. Van de Vijver (1997), among a few others (cf. CASYS 98 and the contributions listed in the Program for CASYS 99), has already attempted to shed light on what seems philosophically pertinent to the subject. She is right in stating that the global/local relation more adequately pertains to anticipation than does the pair particular/universal. The practical implications of this observation have not yet been defined.

From my own perspective–based on pragmatics, which means grounding in the practical experience through which humans become what they are–anticipation corresponds to a characteristic of living beings as they attain the condition at which they constitute their own nature. At this level, predictive models of themselves become possible, and progressively necessary. The thematization of anticipation, which as far as we know is a human being's expression of self-awareness and connectedness, is only one aspect of this stage in the unfolding of our species. According to the premise of this perspective, pragmatics–expressed in what we do and in how and why we do what we do–is where our understanding of anticipation originates. This is also where it returns, in the form of optimizing our actions, including those of defining what these actions should be, what sequence they follow, and how we evaluate them. All these are projections against a future towards which each of us is moving, all tainted by some form of finality (telos), or at least by its less disputed relative called intentionality. The generic why of our existence is embedded in this intentionality.
The source of this finality lies in others: those we interact with, whether in cooperating, in competing, or in a sense of belonging, which over time allowed for the constitution of the identity called humanness. Gordon Pask (1980), the almost legendary cybernetician, called such an entity a cognitive system.

2.1 Self-Entailment and Anticipation

In a dialog on entailment (cf. http://views.vcu.edu/complex)–a fundamental concept in Rosen's explanation of anticipation–a line originating with François Jacob was dropped: "Theories come and go, the frog stays." (Incidentally, Jacob is the author of The Logic of Life, Princeton University Press, 1993.) This brings us back to a question formulated above: Does it matter to a programmer (the reader may substitute his/her profession for the word programmer) that anticipation is based on the self-entailment characteristic of the living? Or that evolution is the source of entailment? If we compare the various types of computation acknowledged since people started building computers and writing software programs, we find that during the syntactically driven initial phases, such considerations actually could not affect the pragmatics of programming. Only relatively recently has a rudimentary semantic dimension been added to computation. In the final analysis, it does not matter which microelectronics, computer architecture, programming languages, operating systems, networks, or communication protocols are used. For all practical purposes, what matters is that between the world and the computation pertinent to some aspects of this world, the relations are still extremely limited.

If a programmer is not just in the business of writing lines of code for a specific application that might improve through a syntactically supported emulation of anticipatory characteristics–think about macros that save typing time by "guessing" which word or expression a user started to type in and "filling in" the letters or words–then it matters that there is something like self-entailment. It matters, too, that the notion of self-entailment supports more adequate explanations of biological processes than any other concept of the physical sciences. On a semantic level, the awareness of self-entailment (through self-associative memory) leads to better solutions in speech and handwriting recognition. However, once the pragmatic level is reached–we are still far from this–understanding the philosophic implications of the nature and condition of anticipation becomes crucial. The reason is that it is not at all clear that characteristics of the living–self-repair, metabolism, and anticipation–can be effectively embodied in machines. This is why the notion of frontier science was mentioned in the Introduction. The frontier is that of conceiving and implementing life-like systems. Whether Rosen's (M, R)-model, defined by metabolism and repair, or others, such as those advanced in neural networks, evolutionary computation, and ALife, will qualify as necessary and sufficient for making anticipation possible outside the realm of the living remains to be seen. I (Nadin, 1988, 1991) argue for computers with a variable configuration based on anticipatory procedures. This model is inspired by the dynamics of the constitution and interaction of minds, but does not suggest an imitation of such processes.
The issue is not, however, reducible to the means (digital computation; algorithmic, non-algorithmic, or heterogeneous processing; signal processing; quantum computation; etc.), but to the encompassing goal.

2.2 Specializations

To nobody's surprise, anticipation, in some form or another, is part of the research program of logic, cognitive science, computer science, robotics, networking, molecular biology, genetics, medicine, art and design, nanotechnology, the mathematics of dynamic systems, and what has become known as ALife, i.e., the field of inquiry into artificial life. Anticipation involves semiotic notions, as it involves a deep understanding of complexity–or, better yet, an improved understanding of complexity that integrates quantitative and qualitative aspects. It is not at all clear that full-fledged anticipation, in the form of machine-supported anticipatory functioning, is a goal within the reach of the species through whose cognitive characteristics it came into being and who became aware of it. Machines, or computations (for those who focus on the various data processing machines), able to anticipate earthquakes, hurricanes, aesthetic satisfaction, disease, financial market performance, lottery drawings, military actions, scientific breakthroughs, social unrest, irrational human behavior, etc., could well claim total control of our universe of existence. Indeed, to correctly anticipate is to be in control. This rather simplistic image of machines or computations able to anticipate cannot be disregarded or relegated to science fiction. Cloning is here to stay; so are many techniques embodying the once-discredited causa finalis.

A philosophic foundation of anticipation has to entertain the many questions and aspects that pertain to the basic assertion according to which anticipation reflects part of our cognitive make-up and, moreover, constitutes its foundation. Even if Kuhn's model of scientific paradigm change had not been abused to the extent of its trivialization, I would avoid the suggestion that anticipation is a new paradigm. Rather, as a frontier in science, it transcends its many specializations as it establishes the requirement for a different way of thinking, a fundamentally different epistemological foundation.

3 Pro-Action vs. Re-Action

Now that the epistemological requirement of a different way of thinking has been brought up, I would like to revisit work done during the years when the very subject of anticipation seemed not to exist (except in the title of Rosen's book). My claim in 1988 (on the occasion of a lecture presented at Ohio State University) was that anticipation lies at the foundation of the entire cognitive activity of the human being. Moreover, through anticipation, we humans gain insight into what keeps our world together as a coherent whole whose future states stand in correlation to the present state as minds grasp it. Minds exist only in relation to other minds; they are instantiations of co-relations. This is also the main thesis of this paper.

For over 300 years–since Descartes' major elaborations (1637, 1644) and Newton's Principia (1687)–science has advanced in understanding what for all practical purposes came to be known as the reactive modality. Causality is experienced in the reactive model of the universe, to the detriment of any pro-active manifestations of phenomena not reducible to the cause-and-effect chain or describable in the vocabulary of determinism.
It is important to understand that what is at issue here is not some silly semantic game, but rather a pragmatic horizon: Are human actions (through which individuals and groups identify themselves, i.e., self-constitute; Nadin, 1997) in reaction to something assumed as given, or are human actions in anticipation of something that can be described as a goal, ideal, or value? But even in this formulation (in which the vocabulary is as far as it can be from the vitalistic notions to which Descartes, Newton, and many others reacted), the suspicion of teleological dynamics–is there a given goal or direction, a final vector?–is not erased. Despite progress made in the last 30 years in understanding dynamic systems, it is still difficult to accept the connection between goal and self-organization, between ideal, or value, and emergent properties.

3.1 Minds Are Anticipations

The mind is in anticipation of events, that is, ahead of them–this was my main thesis over ten years ago. Advanced research (Libet, 1985, 1989) on the so-called "readiness potential" supported this statement. In recent years, work on the "wet brain" as well as work supported by MR-based visualization technologies have fully confirmed this understanding. Having entered the difficult dialog on the nature of cognitive processes from a perspective that no longer accepted the exclusive premise of representation–another heritage from Descartes–I had to examine how processes of self-constitution eventually result in shared knowledge without the assumption of a homunculus. What seemed inexplicable from the perspective of classical or relativist physics–a vast number of actions that seemed instantaneous, in the absence of a better explanation for their connectedness–was coming into focus as constitutive of the human mind. Anticipatory cognitive and motoric scripts, from which in a given context one or another is instantiated, were advanced at that time as a possible description of how, from among many pro-active possible courses of action, one would be realized. Today I would call those possible scripts models and insist that a coherent description of the functioning of the mind is based on the assumption that there are many such models. Additionally, I would add that learning, in its many realizations, is to be understood as an important form of stimulating the generation of models, and of stimulating a competitive relation among them. [Von Foerster (1999) attaches to his e-mail messages a motto that is an encapsulation of what I just described: "Act always so as to increase the number of choices."]

In a subtle way, defense mechanisms–from blinking to reflexes of all types–belong to this family. Anticipatory nausea and vomiting (whether on a ship or related to chemotherapy) is another example. The phantom limb phenomenon (sensation in the area of an amputated limb) is mirrored by pain or discomfort felt before something could actually have caused it. There is a descriptive instance in Lewis Carroll's Through the Looking Glass. Before accidentally pricking her finger, the White Queen cries: "I haven't pricked it yet, but I soon shall." She lives life in reverse, which is what anticipation ultimately affords–provided that the interpretation process is triggered and made part of the self-constitutive pragmatics.

3.1.1 Anticipation is Distributed

As recently as this year, results in the study of the anticipation of moving stimuli by the retina (Berry et al., 1999) made it clear that anticipation is distributed.
The research proved that anticipation of moving stimuli begins in the retina. We no longer expect the visual cortex to do some heavy extrapolation of the trajectory (the predominant model until recently); rather, we know that retinal processing is pro-active. Even if pro-activity is not equally distributed along all sensory channels–some are slower in anticipating than others, not least because sound travels at a slower speed than light does, for example–it defines a characteristic of human perception and sheds new light on motoric activity.

3.1.2 Knowledge as Construction

But there is also Kelly's (1955) constructivist position, which must be acknowledged by researchers into the psychological foundation of anticipation. The adequacy of our constructs is, in his view, their predictive utility. Coherence is gained as we improve our capacity to anticipate events. Knowledge is constructed; validated anticipations enhance cognitive confidence and make further constructs possible. In Kelly's terms, human anticipation originates in the psychological realm (the mind) and reflects the intention to make possible a correspondence between a future experience and certain of our anticipations (Kelly, 1955; Mancuso & Adams-Webber, 1982). Since states of mind somehow represent states of the world, the adequacy of anticipations remains a matter of the test of experience. The basic function of all our representations, as the "fundamental postulate" ascertains, is anticipation (a temporal projection). Alternative courses of action, weighed with respect to their anticipated consequences, represent the pragmatic dimension of this view.

Observed phenomena and their descriptions are not independent of the assumptions we make. This applies to perceptual control theory, as it applies to Kelly's perspective and to any other theory. Moreover, assumptions facilitate or hinder new observations. For those who adopted the view according to which a future state cannot affect a present state, anticipation makes no sense, regardless of whether one points to the subject in various religious schemes, in biology, or in the quantum realm. The situation is not unlike that of Euclidean geometry vs. non-Euclidean geometries. To see the world anew is not an easy task!

Anticipation of moving stimuli, to get back to the discovery mentioned above, is recorded in the form of spike trains of many ganglion cells in the retina. It follows from known mechanisms of retinal processing; in particular, the contrast-gain control mechanism suggests that there will be limits to what kinds of stimuli can be anticipated. Researchers report that variations of speed, for instance, are important; variations of direction are not. Furthermore, since space-based anticipation and time-based anticipation have different metrics, it remains to be seen whether a dominance of one mode over the other is established. As we know, in many cases the meeting between a visual map (projection of the retina to the tectum) and an auditory map takes place in a process called binding. How the two maps are eventually aligned is far from being a matter of semantics (or terminology, if you wish). Synchronization mechanisms, of a nature we cannot yet define, play an important role here. Obviously, this is not control of imagination, even if those pushing such terms feel more forceful in their de facto rejection of anticipation. Arguing from a formal system to existence is quite different from the reverse argumentation (from existence to formalism).
Arguing from computation can take place only within the confines of this particular experience: the more constrained a mechanism, the more programmable it is (as Rosen pointed out, 1991, p. 238). Reaction is indeed programmable, even if at times this is not a trivial task. Pro-active characteristics make for quite a different task. The most impressive success stories so far are in the area of modeling and simulation. To give only one example: Chances are that your laptop (or any other device you use) will one day fall. The future state–stress, strain, depending upon the height, angle, weight, material, etc.–and the current state are in a relation that most frequently does not interest the user of such a portable device. It used to be that physical models were built and subjected to tests (this applies, for instance, to cars as well as to photo cameras). We can now model, and thus to a certain point anticipate, the effects of various possible crashes through simulations based on finite-element analysis. That anticipation itself, in its full meaning, is different in nature from such simulations passes without too much comment. The kind of model we need in order to generate anticipations is a question to which we shall return.

3.2 A Rapidly Expanding Area of Inquiry

An exhaustive analysis of the database of contributions to fundamental and applied research on anticipation reveals that it covers a wide area of inquiry. In many cases, those involved are not even aware of the anticipatory theme. They see the trees, but not yet the forest. More telling is the fact that the major current directions of scientific research allow for, or even require, an anticipatory angle. The simulation mentioned above does not anticipate the fall of the laptop; rather, it visualizes–conveniently for the benefit of designers, engineers, production managers, etc.–what could happen if this possibility were realized. From this possibilistic viewpoint, we infer the necessary characteristics of the product, corresponding to its use (how much force can be exerted on the keyboard, screen, mouse, etc.?) or to its accidental fall. That is, we design in anticipation of such possibilities. Or we should! A sketch of such a forward simulation follows; beyond it, I would like to mention other examples, without the claim of even being close to a complete list.
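The following toy computation is a stand-in of my own for such a finite-element drop simulation (mass, height, and crush distances are illustrative): a possible future state, the fall, is simulated so that it can inform a present design decision.

```python
# Estimate the peak impact force of a falling laptop for two candidate
# casing designs, assuming uniform deceleration over the crush distance.
G = 9.81  # m/s^2

def peak_force(mass_kg: float, drop_height_m: float, crush_mm: float) -> float:
    """Impact decelerated uniformly over the casing's crush distance."""
    v_squared = 2 * G * drop_height_m           # speed^2 at impact
    return mass_kg * v_squared / (2 * crush_mm * 1e-3)

for design, crush in [("rigid casing", 0.5), ("compliant casing", 3.0)]:
    print(f"{design:>16}: peak force ~ {peak_force(1.5, 1.0, crush):,.0f} N")
# The compliant casing spreads the deceleration and cuts the peak force:
# the simulated future state feeds back into the design made now.
```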
3.2.1 An Example from Genetics

But more than Rosen, whose work belongs rather to the meta-level, it was genetics that recovered the terminology of heredity. Having done so, it established a framework of implicit anticipations grounded in the genetic program. Of exceptional importance are the resulting medical alternatives to the "fix-it" syndrome of healthcare practiced as "car repair" (including the new obsession with spare parts and artificial surrogates). Genetic medicine, as slow in coming as it is, is fundamentally geared towards the active recognition of anticipatory traits, instead of pursuing the reactive model based on physical determinism. Although there is not yet a remedy for Huntington's disease, myotonic dystrophy, schizophrenia, Alzheimer's disease, or Parkinson's disease, medical researchers are making progress in the direction of better understanding how the future (the eventual state of diagnosed disease) co-relates to a present state (the unfolding of the individual in time). In the language of medicine, anticipation describes the tendency of such hereditary diseases to become symptomatic at a younger age, and sometimes to become more severe, with each new generation.

We now have two parallel paths of anticipation: one is that of the disorder itself, i.e., the observed object; the other, that of observation. The elaborations within second-order cybernetics (von Foerster, 1976) on the relation between these paths (the classical subject-object problem) make any further comment superfluous. The convergence of the two paths, in what became known as eigenbehavior (or eigenvalue), is of interest to those actively seeking to transcend the identification of genetic defects through the genetic design of a cure. After all, a cure can be conceived as a repair mechanism, related to the process of anticipation.

3.2.2 Art, Simulacrum, Fabrication

That art (healing was also seen as a special type of art not so long ago), in all its manifestations, including the arts of writing (poetry, fiction, drama), theatrical performance, and design–driven by purpose (telos) and in anticipation of what it makes possible–incorporates anticipatory features might be accepted as a metaphor. But once one becomes familiar with what it means to draw, paint, compose, design, write, sing, or perform (with or without devices), anticipation can be seen as the act through which the future (of the work) defines the current condition of the individual in the process of his or her self-constitution as an artist. What is interesting in both medicine and art is that imitation can result only in a category of artifacts to be called simulacra. In other words, the mimesis approach (for example, biomimesis as an attempt to produce organisms, i.e., to replicate life from the inanimate; aesthetic mimesis, replicating art by starting with a mechanism such as the one embodied in a computer program) remains a simulacrum. Between simulacra and what was intended (organisms and, respectively, art) there remains the distance between the authentic and the imitation, human art and machine art. They are, nevertheless, justified in more than one respect: They can be used for many applications, and they deserve to be valued as products of high competence and extreme performance. But no one could or should ignore that the pragmatics of fabrication, characteristic of machines, and the pragmatics of human self-constitution within a dynamics involving anticipation are fundamentally different.

3.2.3 Learning (Human and Machine-Based)

Learning–to mention yet another example–is by its nature an anticipatory activity: The future associates with learning expectations and a sui generis reward mechanism. These are very often disassociated from the context in which learning takes place. That this is fundamentally different from generating predictive models and stimulating competition among them might not be totally clear to the proponents of so-called computational learning theory (COLT), or to a number of researchers of learning–all from reputable fields of scientific inquiry but captive to the action-reaction model dominant in education. It is probably only fair to remark in this vein that teaching and learning experiences within the machine-based model of current education are not different from those mimicked in some computational form. Computer-based training, a very limited experience focused on a well-defined body of information, can provide a cost-efficient alternative to a variety of training programs. What it cannot do is stimulate and trigger anticipatory characteristics because, by design, it is not supposed to override the action-reaction cycle.
3.2.4 Reward

Alternatively, one can see promise in the formalism of neural networks. For instance, anticipation of reward or punishment was observed in functional neuroanatomy research (cf. Knutson, 1998). Activation of circuitry (to use the current descriptive language of brain activity) running from the medial dorsal thalamus through the anterior cingulate and mesial prefrontal cortex was co-related not to motor response but to personality variations. Accordingly, it is quite tempting to look at such mechanisms and to try to introduce reward anticipation into neural network procedures as a method of increasing the performance of artificially mimicked decision-making. Homan (1997) reports on neural networks that "can anticipate rewards before they occur, and use these expectations to make decisions." The focus of this type of research is to emulate biological processes, in particular the dopamine-based reward mechanism that lies behind a variety of goal-oriented behaviors. Dynamic programming supports a similar objective. It focuses on states; their dynamic reassessment is propagated through the neural network in ways considered similar to those mapped in the successful enlisting of brain capabilities. Training, as a form of conditioning based on anticipation, is probably complementary to what one would call instinct-based (or natural) action.

3.2.5 Motion Planning

Animation and robot motion planning, as distant from each other as they appear to some of us, share the goal of providing path planning, that is, finding a collision-free path between an initial position (of the robot's arm or the arm of an animated character) and a goal position. It is clear that the future state influences the current state and that those planning the motion actually coordinate the relation between the two states (see the sketch below). In predictive programs, anticipation is pursued as an evaluation procedure among many possibilities, as in economics or in the social sciences. The focus changes from movement (and planning) to dynamics and probability. A large number of applications, such as pro-active error detection in networks, hard-disk arm movement in anticipation of future requests, traffic control, strategic games (including military confrontation), and risk management, have prompted interest in the many varieties under which anticipatory characteristics can be identified.
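A minimal sketch of the path-planning idea (my own illustration; the grid, start, and goal are assumed): a breadth-first search finds a collision-free path, the goal state shaping the moves selected from the initial position onward.

```python
# Collision-free path planning on a small occupancy grid via BFS.
from collections import deque

GRID = ["....#...",
        ".##.#.#.",
        ".#....#.",
        "...##..."]  # '#' marks an obstacle

def plan(start, goal):
    """Return the shortest obstacle-free path from start to goal, or None."""
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != '#' and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None  # no collision-free path exists

print(plan(start=(0, 0), goal=(3, 7)))
```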
3.3 Aspects of Anticipation

At this point, where the understanding of anticipation as a natural entailment process and the embodiment of anticipatory features in machine-like artifacts meet, it is quite useful to mention that expectation, prediction, and planning–to which others add forecasting and guessing–are not fully equivalent to anticipation, but aspects of it. Let us also make note of the fact that we are not pursuing distinctions on the semantic level, but on the pragmatic level–the only one at which it makes sense to approach the subject.

3.3.1 Expectation, Prediction, Forecast

The practical experience through which humans constitute themselves in expectation of something–rain (when atmospheric conditions are conducive), meeting someone, closing a transaction, etc.–has to be understood as a process of unfolding possibilities, not as an active search within a field of potential events. Expectation involves waiting; it is a rather passive state, too, experienced in connection with something at least probable. Predictions are practical experiences of inferences (weak or strong, arbitrary or motivated, clear-cut or fuzzy, explicit or implicit, etc.) along the physical timeline from past to future. Checking the barometer and noticing pain in an arthritic knee are very different experiences; so are the outcomes: imperative prediction or tentative, ambiguous foretelling. To predict is to connect what is of the nature of a datum (information received as cues, indices, causal identifiers, and the like), experienced once or more frequently, and the unfolding of a similar experience, assumed to lead to a related result. It should be noted here that the deterministic perspective implies that causality affords us predictive power. Based on the deterministic model, many predictive endeavors of impressive performance are successfully carried out (in the form of astronomical tables, geomagnetic data, and the calculations on which the entire space program relies). Under certain circumstances (such as devising economic policies, participating in financial markets, or mining data for political purposes), predictions can form a pragmatic context that embodies the prediction. In other words, a self-referential loop is put in place.

Not fundamentally different are forecasts, although the etymology points to a different pragmatics, i.e., one that involves randomness. What pragmatically distinguishes forecasts from predictions is the focus on specific future events (weather forecasting is the best known pragmatic example, that is, the self-constitution of the forecaster through an analytic activity of data acquisition, processing, and interpretation, whose output takes very precise forms corresponding to the intended communication process). These events are subject to a dynamics for which immediate deterministic descriptions no longer suffice. Whether economic, meteorological, or geophysical (regarding earthquakes, in particular), such forecasts are subject to an interplay of initial conditions, internal and external dynamics, linearity, and nonlinearity (to name only a few factors) that is still beyond our capacity to grasp, let alone to express in some efficient computational form. Although forecasts involve a predictive dimension, the two differ in scope and in method. A computer program for predicting weather could process historic data (weather patterns over a long period of time); its purpose is global prediction (for a season, a year, a decade, etc.). A forecasting algorithm, if at all possible, would be rather local and specific: tomorrow at 11:30 am. Dynamic systems theory tells us how much more difficult forecasting is in comparison with prediction.

Our expectations, predictions, and forecasts co-constitute our pragmatics. That is, they participate in making the world of our actions. There is formative power in each of them. Although expecting, predicting, and forecasting good weather will not bring the sun out, they can lead to better chances for a political candidate in an election. Indeed, we need to distinguish between the categories of events to which these forms of anticipation apply. Some are beyond our current efforts to shape events and will probably remain so; others belong to the realm of human interaction. Recursion would easily describe the self-referential nature of some particular anticipations: expected outcome = f(expectation). That such cases basically belong to the category of indeterminate problems is more suspected than acknowledged. Mutually reinforcing expectations, predictions, and forecasts are the result of more than one hypothesis and their comparative (not necessarily explicit) evaluation. This model can be relatively efficiently implemented in genetic computations, as the sketch below suggests.
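A hedged sketch of such a genetic computation (all details are my own illustrative choices): a population of competing predictive hypotheses is scored by its predictive fit on past observations and refined by selection and mutation.

```python
# Competing hypotheses about a series x[t+1] = x[t] + slope; selection keeps
# the better predictors, mutation generates new candidate models.
import random

random.seed(0)
series = [0.9 * t for t in range(20)]  # hidden regularity: increment of 0.9

def fitness(slope: float) -> float:
    """Negative squared prediction error of the hypothesis on the record."""
    return -sum((series[t] + slope - series[t + 1]) ** 2
                for t in range(len(series) - 1))

population = [random.uniform(-2.0, 2.0) for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # comparative evaluation
    mutants = [s + random.gauss(0.0, 0.1) for s in survivors * 2]
    population = survivors + mutants                  # next generation

print(f"best surviving hypothesis: slope ~ {max(population, key=fitness):.3f}")
```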
3.3.2 Plans, Design, Management

Plans are the expression of more or less well defined goals, associated with the means necessary and sufficient to achieve them. They are conceived in a practical experience taking place under the expectation of reaching an acceptable, optimal, or high ratio between effort and result. Planning is an active pursuit within which expectations are encoded, predictions are made, and forecasts of all kinds (e.g., prices of raw materials and energy sources, weather conditions, individual and collective patterns of behavior, etc.) are considered. Design and architecture, as pragmatic endeavors with clearly defined goals (i.e., to conceive of everything that qualifies as shelter and supports life and work in a "sheltered" society: housing, workplace, various institutions, leisure, etc.), are particular practical experiences that involve planning but extend well beyond it, at least in the anticipatory aesthetic dimension. Every design is the expression of a possible future state–a new chip, a communication protocol, clothing, books, transportation means, medicine, political systems or events, erotic stimuli, meals–that affects the current state–of individuals, groups, society, etc.–through the constitution of perceived and acknowledged needs, expectations, and desires. The dynamics of change embodied in design anticipations is normally higher than that of all other known human practical experiences. Policy, management, and prevention (to name a few additional aspects or dimensions of anticipation) involve giving advance thought, looking forward, directing towards something that, as a goal, influences our actions in reaching it. All these characteristics are part of the dictionary definitions of anticipation. The various words (such as those just referred to) involved in the scientific discourse on anticipation, i.e., its various meanings, pertain to its many aspects; but they are not equivalent.

3.4 Resilience

It is probably useful to interrupt this account of the many ways through which anticipation penetrates the scientific agenda and to invoke a distinction that, at first, defies our acquired understanding of anticipation, at least along the distinctions made above. In a deceptively light presentation, Postrel (1997) suggests a counterdistinction: resilience vs. anticipation. If the subject were only what distinguishes Silicon Valley from the Boston area–both known as regions of technical innovation and fast economic growth, the two elements invoked being predictable weather patterns and, respectively, earthquakes, anything but predictable–we would not have to bother. However, her article presents the political theory of a proficient political scholar, Wildavsky (1988), focused on meeting the challenge of risk either through anticipation, understood as planning that aspires to perfect foresight, or through resilience, a dynamic response based on providing adjustments. The definitions are quite telling: "Anticipation is a mode of control by a central mind; efforts are made to predict and prevent potential dangers before damage is done. . . . Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back." Not surprising is the inference that "anticipation seeks to preserve stability: the less fluctuation, the better. Resilience accommodates variability. . . ."
We seem to have here a reverse view of all that has been presented so far: Anticipation means to see the world as predictable. But it also qualifies anticipation as being quite inappropriate within dynamic systems, that is, exactly where anticipation makes a difference! Rapid changes, especially unexpected turns of events, seem the congenital weakness of anticipation in this model. (Those critical of evolution theory refer to punctuated equilibrium, i.e., fast change for which evolution theory has yet to produce a convincing account.) Hubristic central planning and over-caution can undermine anticipation. This view of anticipation would also imply that it cannot be properly pursued within open systems or within transitory processes–again, where we could most benefit from it. Resilience depends on spontaneity, serendipity, on the unforeseeable. Wildavsky expressed this in rather sweeping statements: ". . . not only markets rely on spontaneity; science and democracy do as well. . . ." Computations of risk are, of course, also part of the subject of anticipation.

3.5 Synchronization

Yet another element of this methodological overview (far from being complete) is synchronization. It can serve here as a terminological cue, or, to recall Rosen (1991), co-temporality or simultaneity would do. In the canonical description of anticipation–the current state of the system is defined by a future state–one aspect of time, sequentiality or precedence (one instant precedes the other), takes over. Yet in the universe of simultaneous events, we encounter anticipation, not only as it refers to space aspects, but as it takes the form of synchronization mechanisms. Whether in genetic mechanisms, in musical perceptions (where temporality is defining), or in the perception of the world (I have already mentioned above the way in which the visual and the auditory "map" are brought in sync, the so-called binding problem, i.e., integration of sensory information arriving on different channels), to name just a few, the coordination mechanism is the final guarantor of the system's coherent functioning. As a synchronization mechanism, anticipation means to "know" (the quotation marks are used to identify a way of speaking) when relatively unrelated, or even related, events have to be integrated in order to make sense. It is therefore helpful to consider this particular kind of anticipation as the result of the work of a "conductor" (or switch, for those technically inclined) prompting the various sound streams originating from independent sources, each operating within its own confines, to merge in a synchronized concert. Cognitively, this means to ensure that what is synchronous in the world is ultimately perceived as such, although information arrives asynchronously in the brain. Synchronization, as opposed to precedence, is not tolerant of error. Precedence is less restrictive: The cold temperatures that might affect the viability (survival) of a deciduous tree, and the cycle of day and night, affected by the cycle of seasons, allow for a range. This is why leaves fall over a relatively long time, depending upon tree kinds and configurations (lone trees, groves, forests, etc.).
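To make the "conductor" image concrete, consider a minimal sketch, entirely my own illustration (the two event streams and the tolerance window are invented), of how asynchronously arriving sensory events could be re-bound according to their source times:

```python
import heapq

def synchronize(streams, tolerance=0.005):
    """Merge asynchronously delivered events so that events that were
    simultaneous at the source (within `tolerance` seconds) are emitted
    together -- a toy version of the 'conductor' described above."""
    merged = list(heapq.merge(*streams))    # order by source timestamp
    groups, current = [], []
    for event in merged:
        # start a new group when the gap to the group's first event is too large
        if current and event[0] - current[0][0] > tolerance:
            groups.append(current)
            current = []
        current.append(event)
    if current:
        groups.append(current)
    return groups

# Hypothetical events: (source_time, channel, payload)
visual   = [(0.100, "V", "flash"), (0.200, "V", "movement")]
auditory = [(0.101, "A", "click"), (0.350, "A", "tone")]
for group in synchronize([visual, auditory]):
    print(group)   # flash and click are bound together; movement and tone are not
```

The point of the sketch is only that binding is decided by source time, not by arrival order, which is what the text above claims the cognitive "conductor" must guarantee.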
So we learn that not only is there a variety of soft-defined forms of anticipation (weather prediction, even after data collection, processing, and interpretation have made spectacular advances, is as soft as soft gets), but also that there are high-precision mechanisms that deserve to be accounted for if we expect to understand, and moreover make use of, anticipatory technologies.

3.6 Some Working Hypotheses

3.6.1 Rosen's Model

Rosen points to the difference between the dynamics of the given object system S and that of the coupled model M; that is, the difference between real time in S and the modeling time of M (faster than that of S) is indicative of anticipation. True, time in this particular description ceases to be an objective dimension of the world, since we can produce quite a variety of related and unrelated time sequences. He also remarks that the requirement of M to be a perfect model is almost never fulfilled. Therefore, the behavior of such a coupled system can only be qualified as quasi-anticipatory (E in the figure represents effectors through which action is triggered by M within S); cf. Fig. 1.

Fig. 1 Rosen's model

As aspects of this functioning, Rosen names, rather ambiguously, planning, management, and policies. Essential here are the parametrization of M and S and the choice of the model. The standard definition, quoted again and again, is that an anticipatory system "contains a predictive model of itself and/or of its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant" (Rosen 1985, p. 339). The definition is not only contradictory–as Dubois (1997) noticed–but also circular–anticipation as a result of a weaker form of anticipation (prediction) exercised through a model. Much more interesting are Rosen's examples: "If I am walking in the woods and I see a bear appear on the path ahead of me, I will immediately tend to vacate the premises"; the "wired-in" winterizing behavior of deciduous trees; the biosynthetic pathway with a forward activation. Each sheds light on the distinction between processes that seem vaguely correlated: background information (what could happen if the encounter with the bear took place, based on what has already happened to others); the cycle of day and night and the related pattern of lower temperatures as days get shorter with the onset of autumn; the pathway for the forward activation and the viability of the cell itself. What is not at all clear is how less than obvious weak correlations end up as powerful anticipation links: heading away from the bear ("I change my present course of action, in accordance with my model's prediction," 1985, p. 7) usually eliminates the danger; loss of leaves saves the tree from freezing; forward activation, as an adaptive process, increases the viability of the cell. We have a "temporal spanning," as Rosen calls it. In his example of senescence ("an almost ubiquitous property of organisms," "a generalized maladaptation without any localizable failure in specific subsystems," 1985, p. 402), it becomes even more clear that the time factor is of essence in the biological realm.

3.6.2 Inclusive Recursion (the Dubois Path)

Dubois (1997, p. 4) is correct in pointing out that this approach is reminiscent of classical control theory. He submits a formal language of inclusive (or implicit) recursion, more precisely, of self-referential systems, in which the value of a variable at a later time (t+1) explicitly contains a predictive model of itself (p. 6):
x(t+1) = f[x(t), x(t+1), p]   (1a)

In this expression, x is the state variable of the system, t stands for time (t is the present, t–1 the past, t+1 the future), and p is a control parameter. Dubois starts from recursion within dynamical discrete systems, where the future state of a system depends exclusively on its present and past:

x(t+1) = f[…, x(t–1), x(t), p]   (1b)

He further defines incursion, i.e., an inclusive or implicit recursion, as

x(t+1) = f[…, x(t–2), x(t–1), x(t), x(t+1), …, p]   (2)

and exemplifies its simplest case as a self-referential system (cf. 1a and 1b). The embedded nature of such a system (it contains a model of itself) explains some of its characteristics, in particular the fact that it is purpose (i.e., finality, or telos) driven. Having provided a mathematical description, Dubois further reasons from the formalism submitted to the mechanism of anticipation. The dynamics of the system is represented by

ΔS/Δt = [S(t+Δt) – S(t)]/Δt = F[S(t), M(t+Δt)]   (3)

That of the predictive model is

ΔM/Δt = [M(t+Δt) – M(t)]/Δt = G[M(t)]   (4)

In order to avoid the contradiction in Rosen's model, Dubois suggests that

ΔM/Δt = [M(t+Δt) – M(t)]/Δt = F[S(t), M(t+Δt)]   (5)

Obviously, what he ascertains is that there is no difference between the system S and the anticipatory model, the result being

ΔS/Δt = [S(t+Δt) – S(t)]/Δt = F[S(t), S(t+Δt)]   (6)

which is, according to his definition, an incursive system.

That Rosen and Dubois take very different positions is clear. In Rosen's view, since the "heart of recursion is the conversion of the present to the future" (1991, p. 78), and anticipation is an arrow pointing in the opposite direction, recursions could not capture the nature of anticipatory processes. Dubois, in producing a different type of recursion, in which the future affects the dynamics, partially contradicts Rosen's view. Incursion (inclusive or implicit recursion) and hyperincursion (an incursion with multiple solutions) describe a particular kind of predictive behavior, according to Dubois. Building upon the McCulloch and Pitts (1943) formal neuron and taking up von Neumann's suggestion that a hybrid digital-analog neuron configuration could explain brain dynamics, Dubois (1990, 1992) submitted a fractal model of neural systems and furthered a non-linear threshold logic (with Resconi, 1993). The incursive map

x(t) = 1 – abs(1 – 2x(t+1))   (7)

where "abs" means "the absolute value" and in which the iterated x(t) is a function of its iterate at a future time t+1, can subsequently be transformed into a hyper-recursive map:

1 – 2x(t+1) = ±(1 – x(t))   (8)

so that

x(t+1) = [1 ± (x(t) – 1)]/2   (9)

It is clear that once an initial condition x(0) is defined, successive iterated values x(t+1), for t = 0, 1, 2, … T, produce two iterations corresponding to the ± sign. In order to avoid the increase of the number of iterated values, i.e., in order to define a single trajectory, a control function u(T–k) is introduced. The resulting hyperincursive process is expressed through

x(t+1) = [1 + (1 – 2u(t+1))(x(t) – 1)]/2 = x(t)/2 + u(t+1) – x(t)·u(t+1)   (10)

It turns out that this equation describes the von Neumann hybrid version through x(t) as a floating-point variable and the control function u(t) as a digital variable, accepting 0 and 1 as values, so that the sign + or – results from

Sg = 2u(t) – 1, for t = 1, 2, … T   (11)

It is tempting to see this hybrid neuron as a building block of a functional entity endowed with anticipatory properties.
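The hyperincursive map is easy to experiment with. The following minimal sketch, my own illustration of equations (7) through (10) with an arbitrarily chosen initial value and control sequence, shows how the digital control u(t) selects a single trajectory among the two solutions available at each step:

```python
def hyperincursive_step(x, u):
    """One step of the hyperincursive map (eq. 10): the digital control
    u in {0, 1} selects which of the two solutions of
    x(t) = 1 - |1 - 2 x(t+1)| (eq. 7) is followed."""
    assert u in (0, 1)
    return (1 + (1 - 2 * u) * (x - 1)) / 2   # = x/2 if u == 0, 1 - x/2 if u == 1

def trajectory(x0, controls):
    xs = [x0]
    for u in controls:
        xs.append(hyperincursive_step(xs[-1], u))
    return xs

# Example values chosen arbitrarily: the digital u(t) steers the
# floating-point state x(t), mirroring the hybrid analog/digital neuron.
xs = trajectory(0.8, [0, 1, 1, 0])
print(xs)
# Each state satisfies the incursive relation (7) with respect to its successor:
assert all(abs(xs[t] - (1 - abs(1 - 2 * xs[t + 1]))) < 1e-12
           for t in range(len(xs) - 1))
```

The final assertion verifies numerically that every point of the single trajectory selected by u(t) is indeed a solution of the incursive equation (7).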
Let me add here that Dubois has continued his work in the direction of producing formal descriptions for neural net applications, memory research, and brain modeling (1998). His work is convincing, but, again, it takes a different direction from the work pursued by Rosen, if we correctly understand Rosen's warning (1991) concerning the non-fractionability of the (M, R)-system, i.e., its intrinsic relational character. Nevertheless, Dubois' results will be seen by many as another suggestion that hybrid analog/digital computation better reflects the complexity of the living and thus might support effective information processing for applications in which the living is not reduced to the physical.

3.6.3 Space-Based Computation

Cellular automata, as discrete space-time models, constitute yet another way of modeling anticipation as a space-based computation. More details can be found in the work of Holmberg (1997), who introduces the concept of spatial automata and correctly positions this approach, as well as some basic considerations on the nature of anticipation in technological applications, within systems theory. Not surprisingly, the community of researchers of anticipation is generating further working hypotheses (Julià, 1998, and Sommer, 1998, addressing intentionality and learnability, respectively). It is very difficult to keep a record of all of these contributions, and even more difficult to comment on works in their incipient phase. Applications of fundamental theoretical anticipatory models are also being submitted in increasing numbers. Dubois himself suggested quite a number of applications, including robotics and neural machines. My focus is on variable configuration computers (regardless of the nature of computation). Obviously, those and similar attempts (many in the program of the CASYS conferences) are quite different from training in various sports, sports performance (think about anticipation in fencing!), political action, the functioning of the judicial system, the dissemination of writing rules for achieving suspense, the automatic generation of jokes (Barker, 1996), the building of economic models, and so on.

3.6.4 Dynamic Competing Models

Without attempting to submit a full-fledged alternative to either Rosen's or Dubois' anticipation descriptions, I will only mention once more that my own work speaks in favor of a changing set of models and of a procedure for maintaining competition among them.

Fig. 2 Changing models and competition among models

Since a diagram is a formalism of sorts, not unlike a mathematical or logical expression, I also reason from it to the dynamics of the system. The diagram ascertains that anticipation implies awareness, and thus processes of interpretation—hence semiotic processes. Mathematical or logical descriptions do not explicitly address awareness, but rather build upon it as a given. Some scientists subsequently commit the error of assuming that because awareness is not explicitly encoded in the formulae, it plays no role whatsoever in the system described. As we shall see in the discussion of the non-local nature of anticipation, quantum experiments suggest that in the absence of the observer, our descriptions of the universe make no sense.

3.6.5 Variability and Computation

To make things even more challenging, there are instances in which anticipation, resulting from the dynamics of natural evolution, is subject to variability, i.e., change. In every game situation, anticipations are at work in a competitive environment.
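Before turning to examples, here is a minimal sketch of the changing-models idea, entirely my own illustration (the two toy predictors and the scoring window are arbitrary choices, not a prescription): several predictive models run in parallel, each is scored against what actually happens, and the one with the best recent record is the one acted upon.

```python
from collections import deque

class CompetingModels:
    """Keep several predictive models in competition and act on whichever
    has recently performed best -- a toy rendering of the mechanism above."""
    def __init__(self, models, window=10):
        self.models = models          # name -> callable(history) -> prediction
        self.errors = {name: deque(maxlen=window) for name in models}
        self.history = []

    def predict(self):
        best = min(self.models,
                   key=lambda n: sum(self.errors[n]) / (len(self.errors[n]) or 1))
        return best, self.models[best](self.history)

    def observe(self, value):
        for name, model in self.models.items():   # score every competitor
            if self.history:
                self.errors[name].append(abs(model(self.history) - value))
        self.history.append(value)

models = {
    "persistence": lambda h: h[-1] if h else 0.0,
    "trend":       lambda h: 2 * h[-1] - h[-2] if len(h) > 1
                             else (h[-1] if h else 0.0),
}
cm = CompetingModels(models)
for v in [1.0, 2.0, 3.0, 4.0]:
    cm.observe(v)
print(cm.predict())   # the linear 'trend' model wins on this series: ('trend', 5.0)
```

Note that the set of models could itself change over time (models added, dropped, mutated), which is where the genetic-computation implementation mentioned earlier would come in.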
Chess players, not unlike "black-box" traders on the financial or stock markets, as well as professional gamblers, could provide a huge amount of testimony regarding "anticipation as a moving target." In my model of an anticipation mechanism based on a changing number of models and on stimulating competition among them, games can serve as a source of information in the validation process. The mathematics of game theory, not unlike the mathematics of ALife formal descriptions applied to trading mechanisms or to flocking behavior, is in many respects pertinent to questions of anticipation. What is not explicitly provided through the ever expanding list of application examples is the broad perspective. Indeed, when the musician performing a well-known musical score seeks an expression that deviates from the expected sound (without being unfaithful to the composer), we have anticipation at work: not necessarily as a result of an understanding of its many implications, rather as a spontaneously developed means of expression. Many similar anticipation-based characteristics are recognizable in the practical human experience of self-constitution in competitive situations, in survival instances (some action performed ahead of the destructive instant), in the interpretation of various types of symptoms. After all, the immune system is one of the most impressive examples of the (M,R) models that Rosen describes. It is in anticipation of an infinity of possible factors that affect the organism during its unfolding from inception to death. The metabolism component and the repair component, although different, are themselves co-related. From the perspective opened by the subject of anticipation, it is implausible that a cure for a deficient immune system will be found in any place other than its repair function. In contradistinction, as we shall see, when one searches for information on the World-Wide Web, there is anticipation involved in the mechanism of pre-fetching information that eventually gives the user the feeling of interactivity, even though what technology makes possible is a simulacrum. The question to be asked, but not necessarily answered in this paper, is: To what extent does becoming aware of anticipation, or living in a particular anticipation (of a concert, of a joke, or of an inherited disease), affect our practical experiences of self-constitution, regardless of whether we build a technology inspired by it or only use the technology, or to what extent are such experiences part of the technology? Friedrich Dürrenmatt, the Swiss writer, once remarked (1962, in his play The Physicists), "A machine only becomes useful when it has grown independent of the knowledge that led to its discovery." This statement will follow us as we get closer to the association between anticipation and computation. It suggests that if we are able to endow machines with anticipatory characteristics (prediction, expectancy, planning, etc.), chances are that our relation to such machines will eventually become more natural. This might change our relation to anticipation altogether, either by further honing natural anticipation capabilities or by effecting their extinction. The broader picture that results from the examination of what actually defines the field of inquiry identifiable as anticipation–in living systems and in machines–is at best contradictory. To be candid, it is also disconcerting, especially in view of the many so-called anticipation-based claims.
But this should not be a discouraging factor. Rather, it should make the need for foundational work even more obvious. One or two books, many disparate articles in various journals, plus the Proceedings of the Computing Anticipatory Systems (CASYS) conferences do not yet constitute a sufficient grounding. It is with this understanding in mind that I have undertaken this preliminary overview (which will eventually become my second book on the subject of anticipation). Since the time my book (1991) was published, and even more after its posting on the World-Wide Web, I have faced colleagues who were rather confused. They wanted to know what, in my opinion, anticipation is; but they were not willing to commit themselves to the subject. It impressed them; but it also made them feel uneasy because the solid foundation of determinism, upon which their reputations were built, and from which they operate, seemed to be put in question. In addition, funding agencies have trouble locating anticipation in their cubbyholes, and even more trouble finding peer reviewers willing to jump over their own shadow and entertain the idea that their views, deeply rooted in the paradigm of physics and machines, deserve to be challenged. My research at Stanford University–which constituted the basis for this report–provided a stimulating academic environment, but not many possible research partners. Students in my classes turned out to be far more receptive to the idea of anticipation than my colleagues. The summary given in this section stands as a testimony to progress, but no more than that, unless it is integrated in the articulation of research hypotheses and models for future development.

4 Minds, Knowledge, Computation–a Borgesian Horizon

The anticipatory nature of the mind–and by this I mean the processes of mind constitution as well as mind interaction–together with the understanding of anticipation as a distributed characteristic of the human being, represents an epistemological and cognitive premise. Let us put these ascertainments in the broader perspective of knowledge–the ultimate goal of our inquiry (knowledge at work included, of course). Niels Bohr (1934), well ahead of the illustrious founders of second-order cybernetics or of today's constructivist model of science, risked a rather scandalous sentence: "It is wrong to think that the task of physics is to find out how nature is." He went on to claim that "Physics concerns what we can say about nature." In this vein, we can say that Rosen and others have proven that anticipation is a characteristic of natural processes. We can also take this description and try to make it the blueprint of various applications (some of which were reported above).

4.1 Computation and Prolepsis

Computation is the dominant aspect of the Weltanschauung today. It is not only a representation, but also the mechanism for processing representations (for which reason I call the computer a semiotic engine). The attempt to reduce everything there is to computation is not new. Science might be rigorous, but it is also inherently opportunistic. That is, those constituting themselves as scientists (i.e., defining themselves in pragmatic endeavors labeled as science) are human beings living in the reality of a generic conflict between goals and means.
Having said this, well aware that Feyerabend (1975) et al. articulated this thought even more obliquely, I have to add that anticipation as computation is, from an epistemological perspective, probably more appropriate to our understanding of the concept than what various pre-computation disciplines had to say or to speculate about anticipation. Between Epicurus' (cf. 1933) term prolepsis–rule, or standard of judgment (the second criterion for truth)–and the variety of analytical interpretations leading to the current infatuation with anticipation, there is a succession of epistemological viewpoints. It is not that background knowledge–"the idea of an object previously acquired through sensations," to which Epicurus referred as a necessary condition for understanding–changed its condition from a criterion of truth to a computational entity. After all, computer systems used in speech recognition or in vision involve a proleptic component. (The machine is trained to recognize something identified as such.) Rather, the pragmatic framework changed, and accordingly we constitute ourselves as researchers of the world in which we live by means of computation rather than by the means used in Epicurus' physics and corresponding theory of knowledge (the canon, as it is known). What I want to say is that computation and the subsequent attempt to see anticipation as computation are but another description of the world and, particularly in the latter case, of our attempts to form an effective body of knowledge about it. In his discussion of prolepsis in the Critique of Pure Reason, Kant (1781) saw it within his own description of the world, that is, as "something that can be known a priori." In Kant's view, only the "property of possessing a degree" is subject to anticipation. Indeed, in computation we can attach certain weights to various data before the data are actually input. These weights will affect the result; and in many cases the art, that is, the appropriateness, of specifying weights influences predictions and forecasts. But no one would infer à rebours that Kant saw the world as a computation, or that knowledge was the result of a computational process.

4.2 Evolutionary Computation

The substratum of basic principles on which a theory of anticipation relies (Epicurus, Kant, Rosen, etc.) affects the theory itself, and thus its possible technological implementations. It has not actually been convincingly demonstrated that we can compute anticipation. What has been accomplished, again and again, is the embodiment of anticipatory characteristics, such as prediction, expectation, management, planning, etc., in computer programs. What has also been carried out is the implementation of control mechanisms, and, bringing us closer to our subject, the modeling of selection mechanisms in the now well-known genetic computing models inspired by the guiding Darwinian concept. Evolutionary computation might well end up displaying anticipatory characteristics if we take the time and the knowledge needed to apply ourselves to the task. It will not be a spontaneous birth, rather a designed and carefully executed computation. Entailment might prove the critical element, as Rosen's work seems to indicate.

4.2.1 Co-Relation vs. Computation

Once a modeling relation is established between a natural system and a formal one, we can start inferring from the formal system to the natural. Let me mention that here we are in the territory of views that often contradict each other.
(For instance, Daniel Dubois and I are still in dialog over some of the examples to follow.) Neural networks or models of ALife, such as the simulation of collections of concurrently interacting agents, qualify as candidates for such an exercise. However, almost no effort has been made to elucidate the functioning of the causal arrow from the future to the present. In winter, temperatures will fall below the freezing point; leaves fall from deciduous trees in anticipation, but the trigger comes from a different process, i.e., the diminishing length of daylight, which stands in no direct causal relation to the phenomenon mentioned. This is, yet again, a co-relation of processes, not a computation, or at least not a Turing machine-based computation. The migration of birds is another example; yet others are the immune system, the sleep mechanism, the blinking mechanism, and the behavior of Pfiesteria (the single-cell microorganisms that produce deadly toxins in anticipation of the fish they will eventually kill). But if we want to stick to computation, which is a description different from the one pursued until now, we land in a domain of parallel processes, not very sophisticated, probably even less sophisticated than the level of a UNIX operating system, but of a much higher order of magnitude. We are in what was described as a big-numbers-based reality. If we could control the process "shorter days," we could eventually graph the inter-relation among the various components at work leading to the shedding of leaves during autumn, or to the sophisticated patterns of behavior of birds preparing for migration.

4.3 Large Numbers and Simple Processes

In respect to brain activity, things are definitely more complicated, but they also fall in the realm of incredibly large numbers applying to rather simple entities and processes. The ongoing CAM-Brain Project (Hugo de Garis, 1994) is supposed to result in an artificial brain of one billion neurons (compare this to the 100 to 120 billion neurons of a wet brain) implemented on Field Programmable Gate Arrays. These digital circuits can be reconfigured as the tasks at hand might require. The notion of reconfiguration speaks to our understanding of anticipation. Still, it remains to be seen whether the artificial brain will actually drive a robot or only simulate the robot's functioning, as it also remains to be seen whether evolutionary patterns will support vision, hearing, their binding, coordinated movements, and, farther down the line, decision-making. The mind in anticipation of events (as I defined mind) is a lead. If we could parametrize the cognitive process and control the various channels, we could in principle learn more about how neuroactivity precedes moving one's hand by 800 milliseconds, and what the consequences of this forecast for human anticipation abilities are. These are all possible experiments, after each of which we will end up not only with more data (the blessing and curse of our age!), but also necessarily with the desire to gain a better understanding of what these data mean. If Rosen's hypothesis holds, namely that anticipation is what distinguishes the biological realm (life) from the physical world, it remains to be seen whether we can do more than compute only particular aspects of it–prediction, expectation, planning, etc.–outside the living.
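The deciduous tree example of 4.2.1 can be turned into a toy simulation. The following sketch uses invented sinusoidal profiles (all numbers are illustrative, not biological data) to show how acting on day length, a signal merely co-related with frost rather than its cause, gets the "decision" made weeks before freezing temperatures arrive:

```python
import math

def day_length(day):     # hours of daylight; peak near day 172 (summer solstice)
    return 12 + 4 * math.cos(2 * math.pi * (day - 172) / 365)

def temperature(day):    # degrees C; lags the light cycle by roughly a month
    return 10 + 15 * math.cos(2 * math.pi * (day - 200) / 365)

# The tree sheds when day length drops below a threshold (10.5 h, invented);
# frost is the first day the temperature crosses zero.
shed_day  = next(d for d in range(200, 365) if day_length(d) < 10.5)
frost_day = next(d for d in range(200, 365) if temperature(d) < 0)
print(shed_day, frost_day, frost_day - shed_day)   # shedding precedes frost
```

The lead time falls out of the co-relation between the two processes, not out of any causal link between daylight and freezing: exactly the point made above.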
Pseudo-anticipation is already part of our practical experience: satellite launches, virtual surgery, and pre-fetching data in order to optimize networks are but three examples of effective pseudo-anticipation. If we could create life, we could study how anticipation emerges as one of its irreducible, or only as one of its specific, properties. Short of this, ALife is involved in the simulation of lifelike processes. Rosen, in defining complexity as not simulatable, comes close to Feynman's (1982) hope that one can best study physics by actually conducting the calculations of the world of physics on the physical entities to be studied. One can call this epistemological horizon Borgesian, knowing that an ideal Borgesian map was none other than the territory mapped. At this point, we need to arrive at a deeper understanding of what we want to do. Regardless of the metaphor, the epistemological foundation does not change. The knowing subject is already shaped by the implicit anticipatory dimension of mind interaction; in other words, the answer to the question meant to increase our knowledge is anticipated. Computation is as adequate a metaphor as we can have today, provided that we do not expect the metaphor to automatically generate the answers to our many questions. Regardless, the question concerning anticipation in the living and in the non-living is far from being settled, even after we might agree on a computational model or expand to something else, such as co-relation, which could either transcend computation or expand it beyond Turing's universal machine.

5 Revisiting Non-Locality

I took it upon myself to approach these matters well aware that I am advancing in mined territory. Comparisons notwithstanding, such was the situation faced by the proponents of quantum theory. To nobody's surprise, Einstein took quantum mechanics, as developed by Heisenberg, Schrödinger, Dirac, et al., under scrutiny, and, well before the theory was even really established, raised objections to it, as well as to Bohr's interpretation. From these objections (the complete list is known as the EPR Paper, 1935, for Einstein, Podolsky, and Rosen), one in particular seems connected to the subject of anticipation. Einstein had a major problem with the property of non-locality–the correlations among separated parts of a quantum system across space and time. He defined such correlations as "spooky actions at a distance" ("spukhafte Fernwirkungen"), remarking that they have to take place at speeds faster than that of light in order to make the various parts of the quantum system match. In simple terms, this spooky action at a distance refers to the links that can develop between two or more photons, electrons, or atoms, even if they are remotely placed in the world. One example often mentioned is the decay of a pion (a subatomic particle). The resulting electron and positron move in opposite directions. Regardless of how far apart they are, they remain connected. We notice the connection only when we measure some of their properties (well aware of the influence measurement has), their spin, for example. Since the initial pion had no spin, the electron and the positron will have opposite-sense spins, so that the net spin is conserved at zero. So, at a distance, if the spin of the electron is clockwise, the spin of the positron is counter-clockwise. It would be out of place to enter here into the details of the discussion and the ensuing developments.
Let me mention only that in support of the EPR document, Bohm (1951) tried, through his notion of a local hidden variable, to find a way for the correlations to be established at a speed lower than that of light. He wanted to save causality within quantum predictions. Bohm's attempt recalls what the community of researchers is trying to accomplish in approaching aspects of anticipation (such as prediction, expectation, forecast, etc.) with the idea that they cover the entire subject. Bell (1964, 1966) produced a theorem demonstrating that certain experimental tests could distinguish the predictions of quantum mechanics from those of any local hidden variable theory. (Incidentally, the physicist Henry P. Stapp (1991) characterized Bell's theorem as "the greatest discovery of all science.") Again, this recalls by analogy Rosen's position, according to which anticipation is what (among other things) distinguishes the living from the rest of the world. By analogy, it suggests that we can clearly discern a particular aspect of anticipation provided in some formal description or in some computer implementation from one that is natural. I mention these two episodes from a history still unfolding in order to explain that what we say in respect to nature–as Bohr defined the goal of physics–will be ultimately subjected to the test of our practical experiences. Einstein has been proven wrong in respect to his understanding of non-locality through many experiments that baffle our common sense, but his theory of relativity still stands. Spooky actions at a distance are a very intuitive description of how someone educated in the spirit of physical determinism, and thinking within this spirit, understands how the future impacts the present, or how anticipation computes backwards from the future to the present. He, like many others, preached the need for learning "to see the world anew," but was unable to position himself in a different consciousness than the one embodied in his theory. As I worked on this text (more precisely, after reworking a draft dated July 22, 1999), Daniel Dubois graciously drew my attention to a number of his research accomplishments pertinent to the connection between anticipation and non-locality. Indeed, over the last seven years, he has applied his mathematical formalism to quite a number of computational aspects of anticipation. Consequently, he was able to establish, by means of incursion and hyperincursion, that the computation pertinent to the membrane neural potential (used as a model of a brain) "gives rise to non-locality effects" (Dubois, 1999). His argument is in line with von Neumann's analogy between the computer and the brain. But we are not yet beyond a first analogy (or reference). Non-locality is, in the last analysis, distance-independent. Furthermore, non-locality is not a limited characteristic of the universe, but a global rule. In the words of Gribbin (1998), non-locality "cuts into the idea of the separateness of things." If the "no-signaling" criterion (energy and information travel no faster than the speed of light) protects the "chain of cause and effect" (effects can never happen before their causes), non-locality ensures the coherence of the universe. Reconciliation between non-locality and causality might therefore be suggestive for our understanding of anticipation. In such a case, the co-relation among elements involved in anticipation can be seen as a computation, but one different in nature from a digital computer, i.e., from a Turing machine.
It follows from here that anticipation understood as co-relation–a notion we will soon focus on–must be a computation different in type from that embodied in a Turing machine.

5.1 Quantum Semiotics, Link Theory, Co-Relation

Let me preface this section by ascertaining that anticipation is a particular form of non-locality, which is quite different from saying that there is non-locality in anticipation. (This is what actually distinguishes my thesis from the results of Dubois.) More precisely, its object is co-relations (over space and time) resulting from entanglements characteristic of the living, and eventually extending beyond the living, as in the quantum universe. These co-relations correspond to the integrated character of the world, moreover, of the universe. Our descriptions ascertain this character and are ultimately an active constituent of this universe. We introduce in this statement a semiotic notion of special significance to the quantum realm: Sign systems not only represent, but also constitute our universe. As with qubits (information units in the quantum universe), we can refer to qusigns as particular semiotic entities through which our descriptions and interpretations of quantum phenomena are made possible.

5.1.1 The Semiotic Engine

As a semiotic engine (Nadin, 1998), a digital computer processes a variety of possible descriptions of ourselves and of the universe of our existence. These descriptions can be indexical (marks left by the entity described), iconic (based on resemblance), or symbolic (established through convention). Anticipatory computation is based on the notion that every sign is in anticipation of its interpretation. Signs are not constituted at the object level, but in an open-ended infinite sign process (semiosis). In sign processes, the arrow of time can run in both directions: from the past through the present to the future, or the other way around, from the future to the present. Signs carry the future (intentions, desires, needs, ideals, etc., all of a nature different from what is given, i.e., all in the range of a final cause) into the present and thus allow us to derive a coherent image of the universe. Actually, not unlike the solution given in the Schrödinger equation, a semiosis is constituted in both directions: from the past into the future, and from the future into the present and further into the past. The interpretant (i.e., the infinite process of sign interpretation) is probably what the standard Copenhagen Interpretation of quantum mechanics considered in defining the so-called "intelligent observer." The two directions of semiosis are in co-relation. In the first case, we constitute understandings based on previous semiotic processes. In the second, we actually make up the world as we constitute ourselves as part of it. This means that the notion of sign has to reflect the two arrows. In other words, the Peircean sign definition (i.e., arrow from object to representamen to interpretant) has to be "reworded":

Fig. 3 Qusign definition

The language of the diagram allows for such a "rewording" much better than so-called natural language: The interpretant as a sign refers to something else anticipated in and through the sign. (Peirce's original definition of sign is "something which stands to somebody for something in some respect or capacity," 2.228.)
Qusigns are thus the unity between the analytical and the synthetic dimension of the sign; their "spin" (to borrow from the description of qubits) can well describe the particular pragmatics through which their meaning is constituted.

5.1.2 Knowing in Advance

The 1930 Copenhagen Interpretation of quantum mechanics (developed primarily by Bohr and Heisenberg) should make us aware of the fact that observation (as in the examples advanced by Rosen, et al.), measurement (as in the evaluation of learning performance of neural networks), and descriptions (such as those telling us how a certain software with anticipatory features works) are more pertinent to our understanding of what we observe, measure, or describe than to understanding the phenomena from which they derive. To measure is to describe the dynamics of what we measure. The coherence we gain is that of our own knowledge, where dynamics resides as a description. However, the anticipation chain takes the path of something that smacks of backward causality, which the established scientific community excluded for a long time and still has difficulty in understanding. Quantum particle "tunneling"–a phenomenon related to quantum uncertainty and to wave-particle duality–might explain our own existence on the planet, but we still don't know what it means (as Feynman repeatedly stated, verbally and in writing, 1965). Quite a number of experiments (cf. Raymond Chiao, University of California-Berkeley; Paul Kwiat, University of Innsbruck; Aephraim Steinberg, US National Institute of Standards and Technology, Maryland, among others) ended up confirming that "the way in which a photon starting out on its journey behaves" in different experimental set-ups suggests that anticipation is at work in the quantum realm. They behave (cf. Gribbin, 1999) as if they "knew in advance what kind of experiment they were about to go through." In view of these experiments, Rosen would have a hard time trying to argue that anticipation is a property exclusive to the living. Moreover, we find in such examples the justification for quantum semiotics: "The behavior of the photons at the beam-splitter is changed by how we are looking at them, even when we have not yet made up our minds about how we are going to look at them. The computer-controlled pseudo-random layout of the device used in the experiment is anticipated by the photon" (Gribbin and Chimsky, 1996). In other words, it is an interpretant process. I should mention here that within the relatively young field of mathematical research called link theory, a framework that generalizes the notion of causality is established in a way that removes its unidirectionality (cf. Etter, 1999). The relational aspect of this theory makes it a very good candidate for a closer look at anticipation, in particular, at what I call co-relations.

5.1.3 Coupling Strength

In various fields of human inquiry, the clear-cut distinction between past, present, and future is simply breaking down. No matter how deep and broad the grudges against a reductionist physical model (such as Newton's) are, Newtonian dynamics is reversible in time, and so is quantum mechanics. The goal of producing a "unified" description of the universe can be justified in more than one way, but regardless of the perspective, coupling strength is what interests us, that is, what "holds" the "universe" together. This applies to the coherence of the human mind, as it applies to monocellular organisms or to the cosmos at large.
It might be that anticipation, in a manner yet unknown to us, plays a role in the coupling of the many parts of the universe and of everything else that appears as coherent to us. Galilean and Newtonian mechanics advanced answers, which were subsequently reformulated and expressed in a more comprehensive way in the theory of relativity (special and general), and afterwards in quantum theories (quantum mechanics, quantum field theory, quantum gravity). In the mechanical universe, to anticipate could mean to pre-compute the trajectory of the moving entity seen as constitutive of broad physical reality. But the causal chain is so tight that the fundamental equation allows only for the existence of recursions (from the present to the future), which we can represent by stacks and compute relatively easily. The past is closed; the future, however, is open, since we can define ad infinitum the coordinates of the changing position of a moving entity. No guesswork: Everything is determined, at least up to a certain level of complexity. Relativity does not do away with the openness of the future, but makes it more difficult to grasp. Within black holes, inherent in the relativistic description but not reducible to it, time is cyclic. In Einstein's curved space-time, a circular "time-line" (Etter's pun) is no more surprising than a "circle around a cylinder in ordinary space." This, however, leads to a cognitive problem: how to accommodate a cycle with openness. Anticipation related to this description of time is quite different from that which might be associated with a physical-mechanical description.

5.2 Possible and Probable

Quantum theories, as we have suggested, pose even more difficult questions in regard to non-locality, and thus to entanglement. In this new cognitive territory, things get even more difficult to comprehend. Determinism, which means that something is (1) or is not (0) caused by something else, gives way to a probabilistic and/or possibilistic distribution: Something is caused probably (i.e., to a certain degree expressed in terms of probability, that is, statistical distribution) by something else. Or it is caused possibly (in Zadeh's sense, 1977), which is a determination different from probability (although not totally unrelated), by something else. Probabilistic influences can be represented through a transition matrix. Given the relation between two entities A and B and their respective states, we can define a Markov chain, i.e., a transition matrix whose ijth entry is the probability of i given j. Such a chain tells us how influences are strung together (chained) and can serve as a predictive mechanism, thus covering some subset of what we call anticipation. Recently, weather satellite observations of the density of green vegetation in Africa (an indication of rainfall) were connected through such processes to the danger of an outbreak of Rift Valley Fever: Linthicum (1999) devised a metric based on climate indicators for a forecasting procedure. The "black boxes" chained in such processes have a single input and a single output representing the complete state variable of the system as it changes over time. Climate and health (the risk of malaria, hantavirus, cholera) are related in more than one way (Epstein, 1999). These examples are less probabilistic than possibilistic. If we pursue possibilities, that is, infer from a determined set of what is possible, a different form of prediction can eventually be achieved.
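A minimal sketch of such a chained prediction, with an invented two-state weather matrix (the numbers are arbitrary), following the convention above that the ijth entry is the probability of i given j:

```python
import numpy as np

states = ["wet", "dry"]
P = np.array([[0.7, 0.2],    # P(wet | wet), P(wet | dry)
              [0.3, 0.8]])   # P(dry | wet), P(dry | dry) -- columns sum to 1

d = np.array([1.0, 0.0])     # today it is certainly wet
for day in range(1, 4):
    d = P @ d                # chaining the influences propagates the distribution
    print(day, dict(zip(states, d.round(3))))
```

Each chained "black box" here has a single input and a single output, the full state distribution, which is precisely why such mechanisms cover only the predictive subset of anticipation.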
Abductive inferences belong to this possibilistic category and are characteristic of functional diagnosis procedures. Here we have an example of semiotics at work, i.e., abductions on symptoms, not really far from what Epicurus meant by prolepsis.

5.2.1 Linked Incursions

For the aspects of anticipation that belong to a non-deterministic realm, we can further try to link descriptions of the form

y = f(x) or z = g(w)   (12a, b)

Indeed, if we substitute y for w, our descriptions become y = f(x) and z = g(y), that is,

z = g(f(x))   (13a, b, c)

The result is a functional relation of the composed functions. Without going into the details of Etter's theory, let me suggest that it can serve as an efficient method for encoding a variety of relations (not only in the case of the identity of two variables). If in the functional description we substitute not the variables (w with y, as shown in the example given above) but the relation between them, we reach a different level of relational encoding that can better support modeling. I even suggest that recursions, incursions, and hyperincursions can be defined for co-related events. For example:

x(ti+1) = f[x(ti), x(ti+1), p]   (14)
y(tj+1) = g[y(tj), y(tj+1), r]   (15)

in which time in the two systems is obviously not the same (ti ≠ tj). A co-relation of time can be established, as can a co-relation among the states x(ti) and y(tj) of the two systems, through the intermediary of a third system acting as the "conductor," or coordinator, z(ti, tj, tk), i.e., dependent upon both the time in each system and its own time metrics. To elaborate on the mathematics of linked incursions goes beyond the intentions of this paper. Let us not forget that we are pursuing an analysis of the particular ways in which anticipation takes place in the successive unified descriptions of the universe produced so far.
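As a purely numerical illustration of equations (14) and (15) and of the coordinator z, the following sketch (my own construction; the coefficients are arbitrary, chosen contractive so that the implicit updates converge) solves each incursive step by fixed-point iteration and lets a "conductor" couple two systems running on different clocks:

```python
def incursive_step(x, f, iterations=50):
    """Solve the implicit update x_next = f(x, x_next) for x_next by
    fixed-point iteration -- one naive way to realize eqs. (14)/(15)."""
    x_next = x
    for _ in range(iterations):
        x_next = f(x, x_next)
    return x_next

# Two incursive systems on different clocks: x advances at every tick,
# y only at every other tick. The 'conductor' feeds each the other's
# latest state, establishing the co-relation across the two time metrics.
x, y = 1.0, 0.0
for k in range(6):
    x = incursive_step(x, lambda x, x1: 0.5 * x + 0.2 * x1 + 0.1 * y)
    if k % 2 == 1:
        y = incursive_step(y, lambda y, y1: 0.6 * y - 0.1 * y1 + 0.2 * x)
    print(k, round(x, 4), round(y, 4))
```

Nothing here is more than a sketch of the bookkeeping involved; the interesting mathematics, as stated above, lies in what relations the coordinator is allowed to impose.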
5.2.2 Alternative Computations

In the quantum perspective of a double identity–particle and wave–trajectory is the superposition of every possible location that a moving entity could conceivably occupy. This is where recursivity, in the classic sense, breaks down. I suspect that Dubois was motivated to look beyond recursivity for improved mathematical tools, to what he calls incursion and hyperincursion, for this particular reason. But I also suspect that linked incursions and hyperincursions will eventually afford more results in dealing with various aspects of anticipation and non-locality. With respect to the explicit statement, prompted by quantum-mechanical non-locality, that anticipation could be a form of computation different from that described by a Turing machine, it is only in the nature of the argument to say that a full-fledged anticipation, not just some anticipatory characteristics (prediction, planning, forecasting, etc.), is probably inherent in quantum computation. Rosen recognized early on (1972) that quantum descriptions were a promising path, although among his publications (even more manuscripts belong to his legacy, cf. 1999) there are no further leads in this direction. Efforts to transcend digital computing through quantum computation are significant in many ways. From the perspective of anticipation, I think Feynman's concept comes closer to what we are after: understanding quantum dynamics not by using a digital computer (as in the tradition of reductionist thinking), but by making use of the elements involved in quantum interactions. As the situation is loosely described: Nature does this calculation all the time!

The same can be said about protein folding, a typical anticipatory process: a small increase in energy (warming up) drives the folding process back, only in order to have it repeated as the energy decreases. This process might also well qualify as an anticipatory computation, with a particular scope, not reducible to digital computation. (As a matter of fact, protein folding exceeds the complexity of digital computation.) It is an efficient procedure, this much we know; but about how it takes place we know as little as about anticipation itself.

5.2.3 Anticipation as Co-Relation (Or: Co-relation as Anticipation?)

Having advanced the notion of anticipation as a co-relation, I would like to point to instances of co-relation that are characteristic of experiences of practical human self-constitution in fields other than the much researched control theory of mechanisms, economic modeling, medicine, networking, and genetic computing. There is, as Peat (undated) once remarked, a strong concern with "a non-local representation of space" in art and literature. The integration of many viewpoints (perspectives) of the same event illustrates the thought. Reconstruction (in the perception of art and literature) means the realization of a future state (describable as understanding or as coordination of the aesthetic intent with the aesthetic interpretation) in the current state of the dynamic system represented by the work of art or of writing, and by its many interpreters (an open-ended process). In Descartes' and Newton's traditions, space and time are local: a taming of artistic expression took place. Peat claims that the "tableau," i.e., the painting, becomes a snapshot in which "motion and change is frozen in a single instant of time. This is a form of objectivity which the concert, the novel, and the diarist express." With the advent of relativity and quantum physics, many perspectives are overlaid. As Peat puts it, "In our century, painting has returned to the non-local order." This holds true for writing (think about Joyce), as well as it does for the dynamic arts (performance, film, video, multimedia). Complementary elements, entangled throughout the unifying body of the work or of its re-presentation, are brought into coherence by co-relations within non-locality-based interactions. Peat goes on to show that communication "cries out for a non-local" description: source and receiver cannot be treated as separable entities. (They are linked, as he poetically describes the process, "by a weak beam of coherent light.") Meaning—which "cannot be associated exclusively with either participant" (n.b., in communication)—could be "said to be 'non-local'."

6 The Relational Path to Co-Relations

That computation, in one of its very many current forms or in a combination of such forms (such as hybrid algorithmic-nonalgorithmic computations), can embody and serve as a test for hypotheses about anticipation should not surprise. Neither should the use of computation imply the understanding that anticipation is ultimately a computation, that it is the only form, or the appropriate form, through which we can implement anticipation-based notions. It is an exciting but dangerous path: If everything is described as a computation—no matter how different computation forms can be—then nothing is a computation, because we lose any distinguishing reference. Epistemologically, this is a dead end.
Furthermore, it has not yet been established whether information processing is a prerequisite of anticipation or only one means among many for describing it. While we could, in principle, embody anticipatory features in computer programs, we might miss a broad variety of anticipation characteristics. For instance, progress was made in describing the behavior of flocks (cf. the Swarm Simulation System at the Santa Fe Institute). But bird migration goes far beyond the modeled behavioral interrelationships. Trigger information differentials, group interaction, learning, orientation, etc. are far more sophisticated than what has been modeled so far. The immune system is yet another example of a complexity level that by far exceeds everything we can imagine within the computational model. Be all this as it may, our current challenge is to express co-relations, which appear as predefined or emerging relations in a dynamic system, by means of information processing in some computational form, or by means of describing natural entanglements. If we could reach these goals, we would effect a change in quality–from a functional to a relational model. Here are some suggestions for this approach.

6.1 Function and Relation

Relations between two or among several entities can be quite complicated. A solid relational foundation requires the understanding of what distinguishes relation from function. For all practical purposes, functions (also called mappings) can be linear or non-linear. (Of course, further distinctions are also important: They can be many- or single-valued, real- or complex-valued, etc.) Relations, however, cover a broader spectrum. A relation of dependence (or independence) can be immediate or intermediated. It can involve hierarchical aspects (as to what affects the relation more within a polyvalent connection), as well as order or randomness. Relations, not unlike functions, can be one-to-one, one-to-many, many-to-one, many-to-many. We can define a negation of a relation, a double negation, an inverse relation, etc. A full logic of relations has not been developed, as far as I know. Rudimentary aspects are, however, part of what after Peirce (1870, 1883) and Schröder (The Circle of Operation of Logical Calculus, 1877) became known as a logic of relations. Russell and Whitehead (Principia Mathematica, 1910) made further clarifications. Let us assume a simple case: xRy, in which x stands in relation to y (son of, higher than, warmer than, premise of, etc.). If we consider various aspects of the world and describe them as relationally connected, we can wind up with statements such as xR1y, zR2w, etc. In this form, it is not clear that Ri exhausts all the relations between the related entities; neither is it clear to what extent we can establish further relations between two relations Ri and Rj and thus eventually infer from their interrelationship new relations among entities that did not have an apparent relation in the first place. In a wide sense, a relation is an n-ary (n = 1, 2, 3, …) "connection"; a binary relation is a particular case and means that the relation xRy is true or false for a pair x, y in the Cartesian product X × Y. As opposed to functions, for which we have relatively good mathematical descriptions, relations are more difficult to encode, but richer in their encodings. Their classification (e.g., inverse, reflexive, symmetric, transitive, equivalence, etc.)
is important insofar as it leads to higher orders (e.g., a reflexive and transitive relation is called a pre-ordering, while an ordering is a reflexive, transitive, and antisymmetric relation).

6.1.1 N-ary Relations

If we revisit some of the examples of anticipation produced so far in the literature–Rosen's deciduous trees, Peat's communication as a non-local unifying process, Linthicum's and Epstein's metrics of weather data and disease patterns, the cognitive implications of the many competing models from which one is eventually instantiated in an action, or the hyperincursion mechanism developed by Dubois (to name but a few)–it becomes obvious that we have chains of n-ary relations xRiny (in which Rin is a specific n-ary relation Ri); that is, in a given situation, several relations are possible, and from all those possible, some are more probable than others. To anticipate means to establish which co-relations, i.e., which relations among relations, are possible, and from those, which are most probable. Anticipation is a process. It takes place within a system and we interpret it as being part of the dynamics of the system. Observed from outside the system–deciduous trees lose their leaves, birds migrate, tennis players anticipate the served ball–anticipation appears as goal-driven (teleologic). In particular, coherence is preserved through anticipation; or a different coherence among the variables of a situation is introduced (such as in playing chess, or in predicting market behavior). Pragmatically, this results in choices driven by possibilities, which appear as embodied in future states. The tennis ball is served and has to be returned in a well-defined area–and this is an important constraint, an almost necessary condition for the game ever to take place! At a speed of over 100 miles per hour, the served ball is not returned through a reaction-based hit, but as a result of an anticipated course of action, one from among many continuously generated well ahead of the serve or as it progresses. If the serving area were increased by only 10%, chances for anticipation would be reduced in a proportion that changes the game from one of resemblance and order to a chaotic, incoherent action that makes no competitive sense. The competition among the various models (all possibilities, but along a probability distribution corresponding to the particular style of the serving player) allows for a successful return, itself subject to various models and competition among them. The whole game can be seen as an unfolding chain of co-relations, i.e., a computation controlled by a range of acceptable parameters. The immune system works in a fundamentally similar fashion. Co-relations corresponding to a wide variety of acceptable parameters are pursued on a continuous basis. Acclimatization, i.e., the way humans adapt to changes in seasons, is but a preservation of the coherence of our individual and collective existence under the influence of anticipated changes in temperature, humidity, the day-night cycle, and a number of other parameters, of some of which we are not even aware.
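A minimal sketch of how such relations, and relations among relations, might be encoded (sets of pairs, with composition and converse as the elementary operations; the kinship examples are invented):

```python
# Binary relations encoded as sets of pairs; composition and converse give
# a first, very partial rendering of the 'logic of relations' sketched above.
def compose(R, S):
    """x (R;S) z  iff  there is some y with x R y and y S z."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

son_of      = {("Tom", "Anna"), ("Paul", "Eva")}
daughter_of = {("Mia", "Anna")}

# A co-relation among relations: 'son of' and 'daughter of' share the
# converse structure 'progeny'; two relations are identical when their
# converses are identical.
progeny = converse(son_of) | converse(daughter_of)
print(progeny)                    # Anna -> Tom, Anna -> Mia, Eva -> Paul
print(compose(son_of, son_of))    # chaining: 'son of a son' (empty for this data)
```

The sketch stops where the text's program begins: identifying, among the encoded relations Ri, Rj, Rk, which higher-order relations Rα, Rβ, Rγ hold among them.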
6.1.2 Instantiated Co-Relations But giving the example of an unfolding sequence does not yet place us in the domain of non-locality. For this we need to distinguish between the diachronic and synchronic axes. A strictly deterministic explanation will always place the anticipated in the sequence of cause-and-effect/action-reaction. The tennis ball is served, days are getting shorter, a virus causes an infection–all seen as causes. In the anticipatory view, the ball is actually not yet served when the sequence of models, from among which one will become the return, starts being generated. The anticipation leading to the fall of leaves is the result of a co-relation involving more than one parameter. What appears as a reaction of the immune system is actually also a co-relation involving the metabolism and the self-repair function. On the one hand, we have an unfolding over time; on the other, a synchronic relation that appears as an infinitely fast process. In reality we have a co-relation, an intertwining of many relations among a huge number of variables of which we are only marginally, if at all, aware. Assuming that we have a good description of the n-ary relations R1n, R2n, …, Rin, and moreover that we can even "relate" relations of a different order (n = 3 vs. n = 4, for instance) and express this relation in a co-relation, it becomes clear that co-relations are descriptive of higher-order relations. For example, two binary relations are identical when their converses are identical. In any sequence of the form xRiy, zRjw, uRkv, etc., we are trying to identify what the relation is among the various relations Ri, Rj, Rk, etc., represented by Ri Rα Rj, Rj Rβ Rk, etc. The co-relations Rα, Rβ, Rγ (e.g., son of and daughter of correspond to progeny, but among the co-relations we will find similarity or distinction, among other things) can apply to the subsets of all Ri (i = 1, …, n) sharing a certain distinctive characteristic (such as similarity). We can further define referents (Ref) and relata (Rel), as well as a relation between referents or relata denoted as Sg (Sagitta, i.e., arrow). By no accident, the arrow can graphically suggest a dynamics from the present to the future (prediction), or the other way around, from the future to the present (anticipation). After Peirce, Tarski (1941) produced an axiomatized theory of relations that, not unlike Boolean logic, could serve as a basis for effective computations of relations and co-relations. It is quite possible that the computation of co-relation could be built around the formalism of quantum computing. In this case, we would operate on the value of the entanglement, not on the state of a particle. It is a task that invites further work. Last but not least, we invite the thought of considering relations among incursions and hyperincursions as a means of testing their descriptive power even more deeply. 6.2 Making Use of the Co-Relation Model Having advanced this model of anticipation as a form of computation, based on the dynamic generation of models and on competition among them, and encoded in a formalism that captures co-relations (thus the spirit of non-locality), I would like to present some examples speaking in favor of an understanding of anticipation that occasionally comes close to what I have proposed above. These are not direct applications of the theory I have advanced so far; rather, they are suggestive of its possible directions, if not of its meaning. 6.2.1 Anticipatory Document Caching Incidentally, anticipatory document caching with the purpose of reducing latency in Web transactions is introduced in a language reminiscent of the observation attributed to Einstein, "Everyone talks about the speed of light but nobody ever does anything about it." The reason for the provocative introduction is obvious: interactive HTML (i.e., text transmission through the Web) requires at least T-1 connection speeds (i.e., 1.5 Mbps). Once images are used, the requirement increases to T-3 lines (45 Mbps). Cross-country interactive screen images push the limit to 155 Mbps. Places such as the major cities on the West Coast of the USA (San Francisco, Los Angeles) are at least 85 milliseconds away from cities on the East Coast (Boston, New York). Interactivity under the limitations of the speed of light–assuming that we can send data at such speed and on the shortest path–is an illusion. In view of this practical observation, those involved in the design of networks, of communication protocols, of client-server access and the like are faced with the task of reducing the time between access request and delivery. Among the methods used are the utilization of inter-request bandwidth (transfer of unrequested files when no other use is made of the connection), proactive requests (preloading a client or intermediate cache with anticipated requests), and optimization of topology (checking where files will be best used, combining identical requests and responses over shared links). What Touch et al. (1992, 1996, 1998) accomplished is an effective procedure for providing co-relations. Evidently, they realize that such co-relations cannot rely on a second channel through which requests would travel faster than the information itself. Accordingly, they initiate processes in fact independent of the communication between the client and the remote server. Such processes facilitate an anticipatory behavior based on predictive cues corresponding to the searched information. They also define where in a network such optimization servers should be placed. I insist upon this mechanism of implementation not only because of its significance for the networked community, but primarily in view of the understanding that anticipatory computation is one of producing meaningful co-relations. The entanglement between the search process and pre-fetching data is stricto sensu a pseudo-anticipation. But so are all other implementations known to date. These are all models of possible actions, and it is quite practical to think of generating even more models as the user gets involved in a certain transaction.
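The pre-fetching mechanism admits a toy illustration. Touch et al. describe their procedure only qualitatively, so the following first-order predictor is an assumption of mine (all names and the prediction rule included): during idle inter-request bandwidth, it speculatively transfers the document most often requested after the current one:

```python
from collections import defaultdict, Counter

class PrefetchCache:
    def __init__(self):
        self.successors = defaultdict(Counter)  # url -> Counter of next urls
        self.cache = set()
        self.last = None

    def record(self, url):
        # observe the actual request stream
        if self.last is not None:
            self.successors[self.last][url] += 1
        self.last = url

    def prefetch(self):
        # in idle time, speculatively fetch the most likely next document
        if self.last and self.successors[self.last]:
            predicted, _ = self.successors[self.last].most_common(1)[0]
            self.cache.add(predicted)

history = ["/index", "/news", "/index", "/news", "/index", "/about"]
pc = PrefetchCache()
for url in history:
    pc.record(url)
    pc.prefetch()
print(pc.cache)   # {'/news', '/index'}: anticipated before being requested
```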
6.2.2 Software Design The same idea was implemented in high-end 3D modeling software (e.g., UNIGRAPHICS), under the guidance of a better understanding of what designers can and would do at a certain juncture in visualizing their projects. The use of computational resources within such programs makes it necessary to anticipate what is possible and to all but preclude functions and utilities that make no sense at a certain point. This is realized through a STRIM function. Instead of allowing the program to react to any and all possible courses of action, some functions are disabled. Henceforth, the functions essential to the task can take advantage of all available resources. (This is what STRIM makes possible.) It is by all practical means a pro-active concept based on realizing the co-relations within the various components of the program. 6.2.3 Agents Coordination Another aspect of co-relation is coordination. It can be ascertained that cooperative activities can take place only if a minimum of anticipation–in one or several of the forms discussed so far–is provided. This applies to every form of cooperation we can think of: commerce, work on an assembly line (where anticipation is built in through planning and control mechanisms), the pragmatics of erecting a building, the performing arts, sports. Coordination is a particular embodiment of anticipation.
It can be expressed, for instance, in requirements of synchronization defined to ensure that, from a set of possibilities, the optimum is actually pursued. Thus, in a given situation, from a broad choice of what is possible, what is optimal is accomplished. The goal is to maximize the probability of successful cooperation. This is achieved by implementing anticipatory characteristics. I would like to mention here as an example the RoboCup world champion, designed and implemented by Manuela Veloso, Peter Stone, and Michael Bowling (of Carnegie Mellon University). This is an autonomous agent collaboration with the purpose of achieving precise goals (in this case, winning a soccer game between robotic teams) in a competitive environment. Stated succinctly in the words of the authors, "Anticipation was one of the major differences between our team and the other teams" (1998). Let us focus on this aspect and briefly describe the solution. What was accomplished in this implementation is a model of an unfolding soccer game. But instead of the limited action-reaction description, the authors endowed the "players" (i.e., agents) with the ability to maximize their contributions through anticipatory movements corresponding to increasing the team's chance to execute successful passes leading to scoring. It is a relational approach: agents are placed in co-relation ("taking into account the position of the other robots–both teammates and adversaries") and in respect to the current and possible future positions of the ball. It is evidently a multi-objective description, that is, a dynamic set of models, with what the authors call "repulsion and attraction points." The anticipation algorithm (SPAR, Strategic Positioning with Attraction and Repulsion) contains weighted single-objective decisions. Correctly assuming that transitions among states (i.e., choices among the various models) for each of the cooperating agents take time (computing cost, in a broader sense), the authors implement the anticipatory feature in the form of selection procedures. The goal is to increase (ideally, to find the maximum of) the probability of future collaboration as the game unfolds. The agents are given a degree of flexibility that results in adjustments supposed to enhance the probability of individual actions useful to the team. Additionally, an algorithm was designed to allow the "players" (team agents) to position themselves in anticipation of possible collaboration needs among teammates. Individual action and team collaboration are coordinated in anticipation (i.e., predictive form) of the actions of the opponents. At times, though, the anticipatory focus degrades to reactive moves. Less successful in the competition, but inspired by Rosen's definition, the team of the University of Caen (France) defined the following program: "Anticipation allows the consideration of global phenomena that cannot be treated through a local reactive approach. The anticipation of the actions of the adversary or of its teammates, the anticipation of the change of the other teamplayers' roles, the anticipation of the ball's movements, and the anticipation of conflicts among teammates are some of the forms of anticipation that our system tries to account for" (Stinckwich, Girault, 1999).
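SPAR is described by its authors only in outline (weighted single-objective decisions over attraction and repulsion points), so the following sketch is an assumption of mine, not the published algorithm: candidate positions are scored by weighted attraction to the anticipated ball position and repulsion from the other robots, and the best-scoring position is selected:

```python
import math

def score(pos, ball_pred, others, w_attr=1.0, w_rep=0.5):
    # hypothetical weights; the published SPAR weighting is not reproduced here
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    attraction = -w_attr * dist(pos, ball_pred)           # nearer the ball is better
    repulsion = -w_rep * sum(1.0 / (dist(pos, o) + 1e-6) for o in others)
    return attraction + repulsion

candidates = [(1.0, 1.0), (2.0, 2.0), (3.0, 1.0)]   # positions under consideration
ball_pred = (2.5, 1.5)                               # anticipated future ball position
others = [(2.0, 1.8), (0.5, 0.5)]                    # teammates and adversaries
best = max(candidates, key=lambda p: score(p, ball_pred, others))
print(best)   # (3.0, 1.0): close to the anticipated ball, away from the crowd
```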
6.2.4 Auto-Associative Memories Along the same line of thought, it is worth mentioning that in the area of the cognitive sciences, neural architectures involving auto-associative memories are used in attempts to implement anticipatory characteristics. Such memories reproduce input patterns as output. In other words, they mimic the fact that we remember what we memorize, which in essence we can describe through recursive or, better yet, incursive functions. The association of patterns of memorized information with themselves is powerful because, in remembering, we provide ourselves part of what we are looking for; that is, we anticipate. The context is supportive of anticipation because it supports the human experience of constituting co-relations. We can apply this to computer memory. Instead of memory-gobbling procedures, which hike the cost of computation and affect its effectiveness, auto-associative memory suggests that we can better handle fewer units, even if these are of a bigger size. Jeff Hawkins (1999), who sees "intelligence as an ability … to make successful predictions about its input," i.e., as an internal measure of sensory prediction, not as a measure of behavior (still an AI obsession), applied his pattern classifier to handprinted-character recognition. The Palm Pilot™ might profit sooner than we think from the anticipatory thought that went into its successful handwriting recognition program, which Hawkins authored.
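A classical way to realize such an auto-associative memory is a Hopfield-style network (the text above names no specific architecture; this choice is mine). Stored patterns act as attractors, so a partial or corrupted cue is completed to the memorized pattern, i.e., part of what we are looking for suffices:

```python
import numpy as np

# Minimal Hopfield-style auto-associative memory. The two stored patterns
# are hypothetical and chosen orthogonal so that recall is clean.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                      # Hebbian weights, no self-coupling

def recall(cue, steps=5):
    s = cue.copy()
    for _ in range(steps):                  # synchronous updates suffice here
        s = np.sign(W @ s)
    return s

cue = patterns[0].copy()
cue[:2] = -cue[:2]                          # corrupt part of the memory
print(recall(cue))                          # recovers patterns[0] exactly
```

One update already restores the stored pattern here because the two memories are orthogonal; real cues and memories are, of course, far less tidy.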
6.3 Interactivity Such and similar examples are computational expressions of the many aspects of anticipation. Their interactive nature draws our attention towards the very telling distinction between algorithmic and interactive computation. In algorithmic computation, we basically start with a description (called an algorithm) of what it takes to accomplish a certain task. The computer–a Turing machine–executes a single-thread operation (the von Neumann paradigm of computation) on data appropriately formatted according to syntactic constraints. As such, the process of computation is disconnected from the outside world. Accordingly, there is no room for anticipation, which always results from interaction. In the interactive model, the outside world drives the process: agents react to other agents; robots operate in a dynamic environment and need to be endowed with anticipatory traits. Searches over networks, not unlike airline ticket purchasing and other interactive tasks, are driven by those who randomly or systematically pursue a goal (find something or let something surprise you). As Peter Wegner (1996), one of the proponents of interactive computation, expresses it, "Algorithms are 'sales contracts' that deliver an output in exchange for an input. A marriage contract specifies behavior for all contingencies of interaction ('in sickness and health') over the lifetime of the object ('till death do us part')." The important suggestion here is that we can conceive of object-based computation in which object operations (two or more) share a hidden state. [Fig. 4. Interactive computation: the shared state] None of the operations (or processes) is algorithmic, since they do not control the shared state, but participate in an interaction through the shared state. They are also subject to external interaction. What is of exceptional importance here is that the response of each operation to messages from outside depends on the shared state accessed through non-local variables of the operations. The non-locality made possible here corresponds to the nature of anticipation. Interactive systems are inherently incomplete, thus decidable in Gödel's sense (i.e., not subject to Gödelian strictures in respect to their consistency). Interactivity requires that the computation remain connected to the practical experiences of human self-constitution, i.e., that we overcome the limitations of syntactically limited processing, or even of semantic referencing, and reach the pragmatic level. Processes in this kind of computation are multi-threaded, open-ended, and subject to predictive or non-predictive interactions. The Turing machine cannot describe them; and implementation in anticipatory computing machines per se is probably still far away. This brings up, somehow by association, the question of whether the category of artifacts called programs is anticipatory by design or by its condition. The question is pertinent not only to computers, since in the language of modern genetics, programming (as the encoding of DNA, for example) plays an important role. It is, however, obvious that silicon hardware (as one possible embodiment of computers) and DNA are quite different, not only in view of their make-up, but more in view of their condition. If birds are "programmed" for their migratory behavior, then these "programs" are based on entailment schemes of extreme complexity. The same applies even more to the immune system. 6.3.1 Virtual Reality A special category of interactive computation is represented by virtual reality implementations, all intrinsically pseudo-anticipatory environments of multi-sensorial condition. In the virtual domain, a given set of co-relations can be established or pursued. Entanglement is part of the broader design. Various processes are triggered in a confined space-and-time, i.e., in a subset of the world. Non-locality is a generic metaphor in the virtual realm, made possible by the integration of the human subject. Sure, as we advance towards molecular, biological, and genetic computation–where the distinction between real and virtual is less than clear-cut–we reach new levels of pragmatic integration. Evolutionary computation will probably be driven by the inherent anticipatory characteristic of the living. As designs of computation processes at the chromosome level are advanced, a foundation is laid for computation that involves and facilitates self-awareness. Interaction at this level goes deeper than the interaction embodied in the examples mentioned above; that is, at this level, mind-interaction-like mechanisms are possible, and thus true anticipation (not just the pseudo type) emerges as a structural property. We are used to the representation of anticipatory processes through models that have a higher speed than the systems modeled: a rocket launch is anticipated in the simulation that "runs" ahead of the real time of the launch. The program anticipates, i.e., searches for all kinds of co-relations bearing upon the proper functioning of a very complex system consisting of various elements tightly integrated in the whole. We have here, not unlike the case of data pre-fetching, or of integration through search in a space of possibilities, or of auto-associative memory, a mechanism for ensuring that co-relations are maintained above and beyond the deterministic one-directional temporal chain. The more interesting bi-directional chain is not even imaginable in such applications. The spookiness of anticipatory computation is not reducible only to the speed of interactions that worried Einstein. It also involves a bi-directional time arrow.
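The rocket-launch example can be reduced to a toy sketch (my illustration; the dynamics are hypothetical): the model integrates the same equations as the system, but k steps ahead of real time, so that action can be based on the anticipated rather than the current state:

```python
def step(state, dt=0.1):
    pos, vel = state                       # toy oscillator dynamics
    return (pos + vel * dt, vel - 0.5 * pos * dt)

def run_ahead(state, k=10):
    for _ in range(k):                     # the model is faster than the system
        state = step(state)
    return state

real = (1.0, 0.0)
for t in range(3):
    ahead = run_ahead(real)
    print(f"t={t}: current x={real[0]:+.3f}, anticipated x={ahead[0]:+.3f}")
    real = step(real)                      # the real system advances one step
```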
The account given in this paper, which simultaneously occasioned the advancement of my own model, identifies the many perspectives of the possible frontier in science represented by the subject of anticipation. 7. Conclusion In order to establish anticipatory computation as an effective method, working models that display anticipatory characteristics need to be realized. The examples given herein can be seen as the specs for such possible models. Work in alternative computing models is illustrative of what can be done and of the return expected. Co-relations, difficult to deal with once we part from the world of first-order objects, are another promising avenue, as are possibilistic-based computations. Finally, if quantum effects prove to take place also in a world of large scale, anticipation, as entanglement (i.e., co-relation), might turn out to be the binding substratum of our universe of existence.

References
Barker, M. (1996) developed a class based on How to Write Horror Fiction, by William F. Nolan.
Bartlett, F.C. (1951). Essays in Psychology. Dedicated to David Katz, Uppsala: Almqvist & Wiksells, pp. 1-17.
Bell, John S. (1964). Physics, 1, pp. 195-200.
Bell, John S. (1966). Review of Modern Physics, 38, pp. 447-452.
Berry, M.J., I.H. Brivanlou, T.A. Jordan, M. Meister (1999). Nature 398, pp. 334-338.
Bohm, David (1951). Quantum Theory, London: Routledge.
Bohr, Niels (1987). Atomic Theory and Description of Nature: Four Essays with an Introductory Survey, AMS Press, June 1934. (See also The Philosophical Writings of Niels Bohr, Vol. 1, Oxbow Press.)
Descartes, René (1637). Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences, Leiden.
Descartes, René (1644). Principia philosophiae.
Dubois, Daniel (1992). Le labyrinthe de l'intelligence: de l'intelligence naturelle à l'intelligence fractale, InterEditions/Paris, Academia/Louvain-la-Neuve.
Dubois, Daniel M. (1992). "The Hyperincursive Fractal Machines as a Quantum Holographic Brain," CCAI 9:4, pp. 335-372.
Dubois, Daniel, G. Resconi (1992). Hyperincursivity: a new mathematical theory, Presses Universitaires de Liège.
Dubois, Daniel M. (1996). "Hyperincursive Stack Memory in Chaotic Automata," Actes du Symposium ECHO: Modèles de la boucle évolutive (A.C. Ehresmann, G.L. Farre, J-P. Vanbreemersch, Eds.), Université de Picardie Jules Verne, pp. 77-82.
Dubois, Daniel M. (1999). "Hyperincursive McCulloch and Pitts Neurons for Designing a Computing Flip-Flop Memory," Computing Anticipatory Systems: CASYS '98, Second International Conference, AIP Conference Proceedings 465, pp. 3-21.
Dürrenmatt, Friedrich (1992). The Physicists, Grove Press. (Originally published as Die Physiker, 1962. A paperback English edition was published by Oxford University Press, 1965.)
Einstein, A., B. Podolsky, N. Rosen (1935). The Physical Review 47, pp. 777-780.
Epicurus (1933). cf. Tullius Cicero, De Natura Deorum (Trans. Harry Rackham), Loeb Classical Library.
Epstein, Paul R., K. Linthicum, et al (1999). "Climate and Health," Science, July 16, 1999, pp. 347-348.
Etter, Thomas (1999). Psi, Influence, and Link Theory (manuscript dated June 11, 1999).
Feyerabend, Paul (1973). Against Method, London: New Left Books.
Feynman, Richard P. (1965). The Character of Physical Law, BBC Publications.
Feynman, Richard P. (1982). "Simulating physics with computers," International Journal of Theoretical Physics, 21:6/7, pp. 467-488.
Foerster, Heinz von (1976). "Objects, tokens for (eigen)-behaviors," Cybernetics Forum, 5:3-4, pp. 91-96.
Foerster, Heinz von (1999). Der Anfang von Himmel und Erde hat keinen Namen, Vienna: Döcker Verlag, 2nd ed.
Garis, Hugo de (1994). "An Artificial Brain: ATR's CAM-Brain Project," New Generation Computing 12(2), pp. 215-221.
Gribbin, John (1998). New Scientist, August 1998.
Gribbin, John (1999). www.epunix.biols.susx.ac.uk/Home/John Gribbin/ Quantum
Gribbin, John, Mark Chimsky (1996). Schrödinger's Kittens and the Search for Reality: Solving the Quantum Mysteries, New York: Little, Brown & Co.
Hawkins, Jeff (1999). "That's Not How My Brain Works," interview in Technology Review, July/August, pp. 76-79.
Holmberg, Stig (1998). "Anticipatory Computing with a Spatio-Temporal Fuzzy Model," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 419-432.
Homan, Christopher (1997). Beauty is a Rare Thing, www.cs.rochester.edu:80/users/facdana/cs240_Fall97/Ass7/Chris Homan
Julià, Pere (1998). "Intentionality, Self-reference, and Anticipation," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 209-243.
Kant, Immanuel (1781). Kritik der reinen Vernunft, 1. Auflage. (cf. Critique of Pure Reason, trans. Norman Kemp Smith, New York: Macmillan Press.)
Kelly, G.A. (1955). The Psychology of Personal Constructs, New York: Norton.
Knutson, Brian (1998). Functional Neuroanatomy of Approach and Active Avoidance Behavior, http://www.gmu.edu/departments/frasnow/abstracts_frames/abs98/Knut9812.
Libet, Benjamin (1989). "Neural Destiny. Does the Brain Have a Mind of Its Own?" The Sciences, March/April 1989, pp. 32-35.
Libet, Benjamin (1985). "Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action," The Behavioral and Brain Sciences, vol. 8, no. 4, December 1985, pp. 529-539.
Linthicum, Kenneth et al (1999). "Climate and Satellite Indicators to Forecast Rift Valley Fever Epidemics in Kenya," Science, July 16, 1999, pp. 367-368.
Mancuso, J.C., J. Adams-Weber (1982). "Anticipation as a constructive process," in J.C. Mancuso & J. Adams-Weber (Eds.), The Construing Person, New York: Praeger, pp. 8-32.
Nadin, Mihai (1988). Minds as Configurations: Intelligence is Process, Graduate Lecture Series, Ohio State University.
Nadin, Mihai (1991). Mind-Anticipation and Chaos. Stuttgart: Belser Presse. (The text can be read in its entirety on the Web at www.networld.it/oikos/naminds1.htm.)
Nadin, Mihai (1997). The Civilization of Illiteracy. Dresden: Dresden University Press.
Nadin, Mihai (1998). "Computers," entry in The Encyclopedia of Semiotics (Paul Bouissac, Ed.), New York: Oxford University Press, pp. 136-138.
Newton, Sir Isaac (1687). Philosophiae naturalis principia mathematica.
Peat, David (undated). Non-locality in nature and cognition, www.redbull.demon.co.uk/bibliography/essays/nat-cog
Peirce, Charles S. (1870). "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic," Memoirs of the American Academy of Arts and Sciences, 9.
Peirce, Charles S. (1883). "The Logic of Relatives," Studies in Logic by Members of the Johns Hopkins University.
Peirce, Charles S. (1931-1935). The Collected Papers of Charles Sanders Peirce, Vols. I-VI (C. Hartshorne and P. Weiss, Eds.), Harvard University Press. The convention for quoting from this work is to cite volume and paragraph, separated by a decimal point: 2.226.
Postrel, Virginia (1997). "Reason on Line," Forbes ASAP, August 25, 1997.
Powers, William T. (1973). Behavior: The Control of Perception, Amsterdam: de Gruyter.
Powers, William T. (1989). Living Control Systems, I and II (Christopher Langton, Ed.), New Canaan: Benchmark Publications. More information at www.ed.uinc.edu/csg.
Rosen, Robert (1972). Quantum Genetics, in Foundations of Mathematical Biology, Vol. I, Subcellular Systems. New York/London: Academic Press.
Rosen, Robert (1985). Anticipatory Systems, Pergamon Press.
Rosen, Robert (1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life, New York: Columbia University Press.
Rosen, Robert (1999). Essays on Life Itself, New York: Columbia University Press.
Sommers, Hans (1998). "The Consequences of Learnability for A Priori Knowledge in a World," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 457-468.
Stapp, Henry P. (1991). Quantum Implications: Essays in Honor of David Bohm (B.J. Hiley & F.D. Peat, Eds.), Routledge.
Stinckwich, Serge and François Girault (1999). Modélisation d'un Robot Footballeur, Mémoire de DEA, Caen. See also: www.info.unicaen.fr/#girault/Memoire_dea
Swarm Simulation System. See: www.swarm.org
Tarski, Alfred (1941). "On the Calculus of Relations," Journal of Symbolic Logic, 6, pp. 73-89.
Touch, Joseph D. et al (1992). A Model for Latency in Communication.
Touch, Joseph D. (1998). Large Scale Active Middleware.
Touch, Joseph D., John Heidemann, Katia Obraczka (1996). Analysis of HTTP Performance.
Touch, Joseph D. See also www.isi.edu.
Veloso, Manuela, Peter Stone, Michael Bowling (1998). Anticipation: A Key for Collaboration in a Team of Agents, paper presented at the 3rd International Conference on Autonomous Agents, October 1998.
Vijver, Gertrudis van de (1997). "Anticipatory Systems. A Short Philosophical Note," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 31-47.
Wegner, Peter (1996). The Paradigm Shift from Algorithms to Interaction, draft of October 14, 1996.
Wildavsky, Aaron B. (1988). Searching for Safety.
Zadeh, Lotfi (1977). Fuzzy Sets as a Basis for a Theory of Possibility, ERL MEMO M77/12.
Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter,[1] where particles adhere to each other. Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, they include the laws of quantum mechanics, electromagnetism and statistical mechanics. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists,[2] and the Division of Condensed Matter Physics is the largest division at the American Physical Society.[3] The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.[4] A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics.[5] According to physicist Philip Warren Anderson, the term was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967,[6] as they felt it did not exclude their interests in the study of liquids, nuclear matter, and so on.[7] Although Anderson and Heine helped popularize the name "condensed matter", it had been present in Europe for some years, most prominently in the form of a journal published in English, French, and German by Springer-Verlag titled Physics of Condensed Matter, which was launched in 1963.[8] The funding environment and Cold War politics of the 1960s and 1970s were also factors that led some physicists to prefer the name "condensed matter physics", which emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, over "solid state physics", which was often associated with the industrial applications of metals and semiconductors.[9] The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics.[5] References to the "condensed" state can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids,[10] Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". Classical physics [Figure: Heike Kamerlingh Onnes and Johannes van der Waals with the helium liquefactor in Leiden (1908)] One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity.[11] This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure.
Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals.[12][notes 1] In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements except for nitrogen, hydrogen, and oxygen.[11] Later, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases,[14] and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures.[15]:35–38 By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and the then newly discovered helium, respectively.[11] Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid.[4] Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law.[16][17]:27–29 However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.[18]:366–368 In 1911, three years after helium was first liquefied, Onnes, working at the University of Leiden, discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value.[19] The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades.[20] Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas".[21] Advent of quantum mechanics Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity.
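As a small numerical illustration (an editorial addition, not part of the original text): the Wiedemann–Franz law states that the ratio of thermal to electrical conductivity, divided by temperature, is a nearly universal Lorenz number. Sommerfeld's free-electron treatment gives L = (π²/3)(k_B/e)², while a purely classical estimate gives (3/2)(k_B/e)²:

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
e = 1.602176634e-19    # elementary charge, C

L_sommerfeld = (math.pi**2 / 3) * (k_B / e)**2   # free-electron (Sommerfeld) value
L_classical = 1.5 * (k_B / e)**2                 # classical kinetic-theory estimate
print(f"Sommerfeld: {L_sommerfeld:.3e} W Ohm / K^2")   # ~2.44e-8, close to experiment
print(f"Classical : {L_classical:.3e} W Ohm / K^2")    # ~1.11e-8, too small
```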
Two years later, Bloch used quantum mechanics to describe the motion of a quantum electron in a periodic lattice.[18]:366–368 The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935.[22] Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.[4] [Figure: A replica of the first point-contact transistor in Bell Labs] In 1879, Edwin Herbert Hall, working at the Johns Hopkins University, discovered a voltage developing across conductors transverse to an electric current in the conductor and a magnetic field perpendicular to the current.[23] This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time, since the electron was experimentally discovered only 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation of the quantum Hall effect discovered half a century later.[24]:458–460[25] Magnetism as a property of matter has been known in China since 4000 BC.[26]:1–2 However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization.[27] Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials.[26] In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.[28]:9 The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model, which described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization.[26] The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research, such as by Bloch on spin waves and Néel on antiferromagnetism, led to developing new magnetic materials with applications to magnetic storage devices.[26]:36–38,48
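A minimal Metropolis Monte Carlo simulation of the 2D Ising model (a standard textbook illustration added editorially; lattice size and sweep counts are arbitrary) makes the onset of spontaneous magnetization below the critical temperature T_c ≈ 2.269 J/k_B directly visible:

```python
import numpy as np

rng = np.random.default_rng(0)

def sweep(s, T):
    # one Metropolis sweep: attempt n*n single-spin flips
    n = s.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nb = s[(i+1) % n, j] + s[(i-1) % n, j] + s[i, (j+1) % n] + s[i, (j-1) % n]
        dE = 2 * s[i, j] * nb            # energy cost of flipping spin (i, j), J = 1
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

def magnetization(T, n=16, sweeps=300):
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        sweep(s, T)
    return abs(s.mean())

for T in (1.5, 2.269, 3.5):
    # rough values: large below T_c, intermediate near it, small above it
    print(f"T={T}: |m| ~ {magnetization(T):.2f}")
```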
Modern many-body physics The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect.[30] After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory, wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau quasiparticles.[30] Landau also developed a mean field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases.[31] Eventually, in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair.[32] [Figure: The quantum Hall effect: components of the Hall resistivity as a function of the external magnetic field][33]:fig. 14 The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s.[34] Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.[34] The quantum Hall effect was discovered by Klaus von Klitzing in 1980 when he observed the Hall conductance to be integer multiples of a fundamental constant, e²/h (see figure). The effect was observed to be independent of parameters such as system size and impurities.[33] In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance can be characterized in terms of a topological invariant called the Chern number.[35][36]:69, 74 Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductance was now a rational multiple of a constant. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction.[37] The study of topological properties of the fractional Hall effect remains an active field of research. In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role.[38] A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics.[39] In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator,[40] in accord with earlier theoretical predictions.[41] Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, the existence of a topological surface state in this material would lead to a topological insulator with strong electronic correlations. Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, band structure, and density functional theory.
Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of the mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries. Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents.[32] For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known.[42] Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon.[43] Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two non-magnetic insulators are joined to create conductivity, superconductivity, and ferromagnetism. Electronic theory of solids The metallic state has historically been an important building block for studying properties of solids.[44] The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of the then-newly discovered electrons. He was able to derive the empirical Wiedemann-Franz law and obtain results in close agreement with the experiments.[17]:90–91 This classical model was then improved by Arnold Sommerfeld, who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law.[17]:101–103 In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms.[17]:48[45] In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, called the Bloch wave.[46] Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence approximation methods are needed to obtain meaningful predictions.[47] The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later, in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation; only the free electron gas case can be solved exactly.[44]:330–337 Finally, in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory, which gave realistic descriptions for bulk and surface properties of metals.
The density functional theory (DFT) has been widely used since the 1970s for band structure calculations of a variety of solids.[47] Symmetry breaking Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, which breaks U(1) phase rotational symmetry.[48][49] Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.[50] Phase transition Phase transition refers to the change of phase of a system, which is brought about by a change in an external parameter such as temperature. A classical phase transition occurs at finite temperature when the order of the system is destroyed. For example, when ice melts and becomes water, the ordered crystal structure is destroyed. In quantum phase transitions, the temperature is set to absolute zero, and a non-thermal control parameter, such as pressure or magnetic field, causes the phase transition when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian. Understanding the behavior of quantum phase transitions is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.[51] Two classes of phase transitions occur: first-order transitions and continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties such as correlation length, specific heat, and magnetic susceptibility diverge.[51] These critical phenomena pose serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.[52]:75ff The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems that involve short-range interactions near the critical point, a better theory is needed.[53]:8–11
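In the mean-field approximation, the Ginzburg–Landau description reduces, for a uniform system, to minimizing a Landau free energy F(m) = a(T − Tc)m² + b m⁴ over the order parameter m. The following sketch (an editorial illustration; the coefficients a, b and Tc are hypothetical) shows the order parameter vanishing above Tc and growing as √(Tc − T) below it:

```python
import numpy as np

a, b, Tc = 1.0, 1.0, 2.0          # hypothetical Landau coefficients
m = np.linspace(-2, 2, 2001)      # trial values of the order parameter

def order_parameter(T):
    # minimize F(m) = a (T - Tc) m^2 + b m^4 on the grid
    F = a * (T - Tc) * m**2 + b * m**4
    return abs(m[np.argmin(F)])

for T in (1.0, 1.5, 2.0, 2.5):
    analytic = np.sqrt(a * max(Tc - T, 0) / (2 * b))   # mean-field prediction
    print(f"T={T}: m={order_parameter(T):.3f} (analytic {analytic:.3f})")
```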
Near the critical point, fluctuations happen over a broad range of size scales, while the features of the whole system are scale invariant. Renormalization group methods successively average out the shortest wavelength fluctuations in stages while retaining their effects in the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions.[52]:11 Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry.[54] Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat; and measurement of transport via thermal and heat conduction. [Figure: X-ray diffraction pattern from a protein crystal] Several condensed matter experiments involve scattering of an experimental probe, such as X-rays, optical photons, neutrons, etc., off constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density.[55]:33–34 Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes.[55]:33–34[56]:39–43 Similarly, positron annihilation can be used as an indirect measurement of local electron density.[57] Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.[52]:258–259 External magnetic fields In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems.[58] Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their neighborhood. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data.[59]:69[60]:185 Quantum oscillations are another experimental method where high magnetic fields are used to study material properties such as the geometry of the Fermi surface.[61] High magnetic fields will be useful in experimental testing of various theoretical predictions, such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect.[59]:57 Cold atomic gases [Figure: The first Bose–Einstein condensate observed in a gas of ultracold rubidium atoms. The blue and white areas represent higher density.] Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures.
Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model the behavior of more complicated systems, such as frustrated magnets.[62] In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.[63][64] In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.[65] [Figure: Computer simulation of nanogears made of fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale.] Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor,[4] laser technology,[52] and several phenomena studied in the context of nanotechnology.[66]:111ff Methods such as scanning tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication.[67] In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches are proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons from fractional quantum Hall effect states.[67] Condensed matter physics also has important uses for biophysics, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis.[67] Notes 1. ^ Both hydrogen and nitrogen have since been liquefied; however, ordinary liquid nitrogen and hydrogen do not possess metallic properties. Physicists Eugene Wigner and Hillard Bell Huntington predicted in 1935[13] that a metallic state of hydrogen exists at sufficiently high pressures (over 25 GPa); however, this has not yet been observed. References 1. ^ Taylor, Philip L. (2002). A Quantum Approach to Condensed Matter Physics. Cambridge University Press. ISBN 0-521-77103-X.  2. ^ "Condensed Matter Physics Jobs: Careers in Condensed Matter Physics". Physics Today Jobs. Archived from the original on 2009-03-27. Retrieved 2010-11-01.  3. ^ "History of Condensed Matter Physics". American Physical Society. Retrieved 27 March 2012.  4. ^ a b c d Cohen, Marvin L. (2008). "Essay: Fifty Years of Condensed Matter Physics". Physical Review Letters. 101 (25): 250001. Bibcode:2008PhRvL.101y0001C. doi:10.1103/PhysRevLett.101.250001. PMID 19113681. Retrieved 31 March 2012.  5. ^ a b Kohn, W. (1999). "An essay on condensed matter physics in the twentieth century" (PDF). Reviews of Modern Physics. 71 (2): S59. Bibcode:1999RvMPS..71...59K. doi:10.1103/RevModPhys.71.S59. Archived from the original (PDF) on 25 August 2013. Retrieved 27 March 2012.  6. ^ "Philip Anderson". Department of Physics. Princeton University. Retrieved 27 March 2012.  7. ^ "More and Different". World Scientific Newsletter. 33: 2. November 2011.  8. ^ "Physics of Condensed Matter". Retrieved 20 April 2015.  10. ^ Frenkel, J. (1947). Kinetic Theory of Liquids. Oxford University Press.  11.
^ a b c Goodstein, David; Goodstein, Judith (2000). "Richard Feynman and the History of Superconductivity" (PDF). Physics in Perspective. 2 (1): 30. Bibcode:2000PhP.....2...30G. doi:10.1007/s000160050035. Retrieved 7 April 2012.  12. ^ Davy, John (ed.) (1839). The collected works of Sir Humphry Davy: Vol. II. Smith Elder & Co., Cornhill.  13. ^ Silvera, Isaac F.; Cole, John W. (2010). "Metallic Hydrogen: The Most Powerful Rocket Fuel Yet to Exist". Journal of Physics: Conference Series. 215: 012194. Bibcode:2010JPhCS.215a2194S. doi:10.1088/1742-6596/215/1/012194.  14. ^ Rowlinson, J. S. (1969). "Thomas Andrews and the Critical Point". Nature. 224 (8): 541–543. Bibcode:1969Natur.224..541R. doi:10.1038/224541a0.  15. ^ Atkins, Peter; de Paula, Julio (2009). Elements of Physical Chemistry. Oxford University Press. ISBN 978-1-4292-1813-9.  16. ^ Kittel, Charles (1996). Introduction to Solid State Physics. John Wiley & Sons. ISBN 0-471-11181-3.  17. ^ a b c d Hoddeson, Lillian (1992). Out of the Crystal Maze: Chapters from The History of Solid State Physics. Oxford University Press. ISBN 978-0-19-505329-6.  18. ^ a b Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century (Reprint ed.). Princeton University Press. ISBN 978-0-691-09552-3.  19. ^ van Delft, Dirk; Kes, Peter (September 2010). "The discovery of superconductivity" (PDF). Physics Today. 63 (9): 38–43. Bibcode:2010PhT....63i..38V. doi:10.1063/1.3490499. Retrieved 7 April 2012.  20. ^ Slichter, Charles. "Introduction to the History of Superconductivity". Moments of Discovery. American Institute of Physics. Retrieved 13 June 2012.  21. ^ Schmalian, Joerg (2010). "Failed theories of superconductivity". Modern Physics Letters B. 24 (27): 2679–2691. arXiv:1008.0447 . Bibcode:2010MPLB...24.2679S. doi:10.1142/S0217984910025280.  22. ^ Aroyo, Mois I.; Müller, Ulrich; Wondratschek, Hans (2006). "Historical introduction" (PDF). International Tables for Crystallography. International Tables for Crystallography. A: 2–5. doi:10.1107/97809553602060000537. ISBN 978-1-4020-2355-2.  23. ^ Hall, Edwin (1879). "On a New Action of the Magnet on Electric Currents". American Journal of Mathematics. 2 (3): 287–92. doi:10.2307/2369245. JSTOR 2369245. Archived from the original on 2007-02-08. Retrieved 2008-02-28.  24. ^ Landau, L. D.; Lifshitz, E. M. (1977). Quantum Mechanics: Nonrelativistic Theory. Pergamon Press. ISBN 0-7506-3539-8.  25. ^ Lindley, David (2015-05-15). "Focus: Landmarks—Accidental Discovery Leads to Calibration Standard". APS Physics. Archived from the original on 2015-09-07. Retrieved 2016-01-09.  26. ^ a b c d Mattis, Daniel (2006). The Theory of Magnetism Made Simple. World Scientific. ISBN 981-238-671-8.  27. ^ Chatterjee, Sabyasachi (August 2004). "Heisenberg and Ferromagnetism". Resonance. 9 (8): 57–66. doi:10.1007/BF02837578. Retrieved 13 June 2012.  28. ^ Visintin, Augusto (1994). Differential Models of Hysteresis. Springer. ISBN 3-540-54793-2.  29. ^ Merali, Zeeya (2011). "Collaborative physics: string theory finds a bench mate". Nature. 478 (7369): 302–304. Bibcode:2011Natur.478..302M. doi:10.1038/478302a. PMID 22012369.  30. ^ a b Coleman, Piers (2003). "Many-Body Physics: Unfinished Revolution". Annales Henri Poincaré. 4 (2): 559–580. arXiv:cond-mat/0307004v2 . Bibcode:2003AnHP....4..559C. doi:10.1007/s00023-003-0943-9.  31. ^ Kadanoff, Leo P. (2009). Phases of Matter and Phase Transitions; From Mean Field Theory to Critical Phenomena (PDF). The University of Chicago.  32. ^ a b Coleman, Piers (2016).
Introduction to Many Body Physics. Cambridge University Press. ISBN 978-0-521-86488-6.  33. ^ a b von Klitzing, Klaus (9 Dec 1985). "The Quantized Hall Effect" (PDF).  34. ^ a b Fisher, Michael E. (1998). "Renormalization group theory: Its basis and formulation in statistical physics". Reviews of Modern Physics. 70 (2): 653–681. Bibcode:1998RvMP...70..653F. doi:10.1103/RevModPhys.70.653. Retrieved 14 June 2012.  35. ^ Avron, Joseph E.; Osadchy, Daniel; Seiler, Ruedi (2003). "A Topological Look at the Quantum Hall Effect". Physics Today. 56 (8): 38–42. Bibcode:2003PhT....56h..38A. doi:10.1063/1.1611351.  36. ^ David J Thouless (12 March 1998). Topological Quantum Numbers in Nonrelativistic Physics. World Scientific. ISBN 978-981-4498-03-6.  37. ^ Wen, Xiao-Gang (1992). "Theory of the edge states in fractional quantum Hall effects" (PDF). International Journal of Modern Physics C. 6 (10): 1711–1762. Bibcode:1992IJMPB...6.1711W. doi:10.1142/S0217979292000840. Retrieved 14 June 2012.  38. ^ Quintanilla, Jorge; Hooley, Chris (June 2009). "The strong-correlations puzzle" (PDF). Physics World. Archived from the original (PDF) on 6 September 2012. Retrieved 14 June 2012.  39. ^ Field, David; Plekan, O.; Cassidy, A.; Balog, R.; Jones, N.C. and Dunger, J. (12 Mar 2013). "Spontaneous electric fields in solid films: spontelectrics". Int.Rev.Phys.Chem. 32 (3): 345–392. doi:10.1080/0144235X.2013.767109.  40. ^ Eugenie Samuel Reich. "Hopes surface for exotic insulator". Nature.  41. ^ Dzero, V.; K. Sun; V. Galitski; P. Coleman (2009). "Topological Kondo Insulators". Physical Review Letters. 104 (10): 106408. arXiv:0912.3750 . Bibcode:2010PhRvL.104j6408D. doi:10.1103/PhysRevLett.104.106408. Retrieved 2013-01-06.  42. ^ "Understanding Emergence". National Science Foundation. Retrieved 30 March 2012.  43. ^ Levin, Michael; Wen, Xiao-Gang (2005). "Colloquium: Photons and electrons as emergent phenomena". Reviews of Modern Physics. 77 (3): 871–879. arXiv:cond-mat/0407140 . Bibcode:2005RvMP...77..871L. doi:10.1103/RevModPhys.77.871.  44. ^ a b Neil W. Ashcroft; N. David Mermin (1976). Solid state physics. Saunders College. ISBN 978-0-03-049346-1.  45. ^ Eckert, Michael (2011). "Disputed discovery: the beginnings of X-ray diffraction in crystals in 1912 and its repercussions". Acta Crystallographica A. 68 (1): 30–39. Bibcode:2012AcCrA..68...30E. doi:10.1107/S0108767311039985.  46. ^ Han, Jung Hoon (2010). Solid State Physics (PDF). Sung Kyun Kwan University.  47. ^ a b Perdew, John P.; Ruzsinszky, Adrienn (2010). "Fourteen Easy Lessons in Density Functional Theory" (PDF). International Journal of Quantum Chemistry. 110 (15): 2801–2807. doi:10.1002/qua.22829. Retrieved 13 May 2012.  48. ^ Nambu, Yoichiro (8 December 2008). "Spontaneous Symmetry Breaking in Particle Physics: a Case of Cross Fertilization".  49. ^ Greiter, Martin (16 March 2005). "Is electromagnetic gauge invariance spontaneously violated in superconductors?". Annals of Physics. 319: 217–249. arXiv:cond-mat/0503400 . Bibcode:2005AnPhy.319..217G. doi:10.1016/j.aop.2005.03.008.  50. ^ Leutwyler, H. (1996). "Phonons as Goldstone bosons": 9466. arXiv:hep-ph/9609466v1 .  51. ^ a b Vojta, Matthia (16 Sep 2003). "Quantum phase transitions". Reports on Progress in Physics. 66: 2069–2110. arXiv:cond-mat/0309604  [cond-mat]. Bibcode:2003RPPh...66.2069V. doi:10.1088/0034-4885/66/12/R01.  52. ^ a b c d Condensed-Matter Physics, Physics Through the 1990s. National Research Council. 1986. ISBN 0-309-03577-5.  53. ^ Malcolm F. 
Collins Professor of Physics McMaster University. Magnetic Critical Scattering. Oxford University Press, USA. ISBN 978-0-19-536440-8.  54. ^ Richardson, Robert C. (1988). Experimental methods in Condensed Matter Physics at Low Temperatures. Addison-Wesley. ISBN 0-201-15002-6.  55. ^ a b Chaikin, P. M.; Lubensky, T. C. (1995). Principles of condensed matter physics. Cambridge University Press. ISBN 0-521-43224-3.  56. ^ Wentao Zhang (22 August 2012). Photoemission Spectroscopy on High Temperature Superconductor: A Study of Bi2Sr2CaCu2O8 by Laser-Based Angle-Resolved Photoemission. Springer Science & Business Media. ISBN 978-3-642-32472-7.  57. ^ Siegel, R. W. (1980). "Positron Annihilation Spectroscopy". Annual Review of Materials Science. 10: 393–425. Bibcode:1980AnRMS..10..393S. doi:10.1146/  58. ^ Committee on Facilities for Condensed Matter Physics (2004). "Report of the IUPAP working group on Facilities for Condensed Matter Physics : High Magnetic Fields" (PDF). International Union of Pure and Applied Physics. The magnetic field is not simply a spectroscopic tool but is a thermodynamic variable which, along with temperature and pressure, controls the state, the phase transitions and the properties of materials.  59. ^ a b Committee to Assess the Current Status and Future Direction of High Magnetic Field Science in the United States; Board on Physics and Astronomy; Division on Engineering and Physical Sciences; National Research Council (25 November 2013). High Magnetic Field Science and Its Application in the United States: Current Status and Future Directions. National Academies Press. ISBN 978-0-309-28634-3.  60. ^ Moulton, W. G.; Reyes, A. P. (2006). "Nuclear Magnetic Resonance in Solids at very high magnetic fields". In Herlach, Fritz. High Magnetic Fields. Science and Technology. World Scientific. ISBN 978-981-277-488-0.  61. ^ Doiron-Leyraud, Nicolas; et al. (2007). "Quantum oscillations and the Fermi surface in an underdoped high-Tc superconductor". Nature. 447 (7144): 565–568. arXiv:0801.1281 . Bibcode:2007Natur.447..565D. doi:10.1038/nature05872. PMID 17538614.  62. ^ Buluta, Iulia; Nori, Franco (2009). "Quantum Simulators". Science. 326 (5949): 108–11. Bibcode:2009Sci...326..108B. doi:10.1126/science.1177838. PMID 19797653.  63. ^ Greiner, Markus; Fölling, Simon (2008). "Condensed-matter physics: Optical lattices". Nature. 453 (7196): 736–738. Bibcode:2008Natur.453..736G. doi:10.1038/453736a. PMID 18528388.  64. ^ Jaksch, D.; Zoller, P. (2005). "The cold atom Hubbard toolbox". Annals of Physics. 315 (1): 52–79. arXiv:cond-mat/0410614 . Bibcode:2005AnPhy.315...52J. doi:10.1016/j.aop.2004.09.010.  65. ^ Glanz, James (October 10, 2001). "3 Researchers Based in U.S. Win Nobel Prize in Physics". The New York Times. Retrieved 23 May 2012.  66. ^ Committee on CMMP 2010; Solid State Sciences Committee; Board on Physics and Astronomy; Division on Engineering and Physical Sciences, National Research Council (21 December 2007). Condensed-Matter and Materials Physics: The Science of the World Around Us. National Academies Press. ISBN 978-0-309-13409-5.  67. ^ a b c Yeh, Nai-Chang (2008). "A Perspective of Frontiers in Modern Condensed Matter Physics" (PDF). AAPPS Bulletin. 18 (2). Retrieved 31 March 2012.  Further readingEdit
Richard Feynman
From Wikipedia, the free encyclopedia

Born: Richard Phillips Feynman, May 11, 1918, New York City
Died: February 15, 1988 (aged 69), Los Angeles, California
Residence: United States
Nationality: American
Fields: Theoretical physics
Institutions: Cornell University; California Institute of Technology
Alma mater: Massachusetts Institute of Technology (B.S.); Princeton University (Ph.D.)
Thesis: The Principle of Least Action in Quantum Mechanics (1942)
Doctoral advisor: John Archibald Wheeler[1]
Other academic advisors: Manuel Sandoval Vallarta
Doctoral students: F. L. Vernon, Jr.[1]; Willard H. Wells[1]; Al Hibbs[1]; George Zweig[1]; Giovanni Rossi Lomanitz[1]; Thomas Curtright[1]; James M. Bardeen
Other notable students: Douglas D. Osheroff; Paul Steinhardt; Robert Barro; W. Daniel Hillis
Influences: Paul Dirac
Influenced: Freeman Dyson
Notable awards: Albert Einstein Award (1954); E. O. Lawrence Award (1962); Nobel Prize in Physics (1965); Oersted Medal (1972); National Medal of Science (1979)
Spouses: Arline Greenbaum (m. 1941–45; her death); Mary Louise Bell (m. 1952–56)[2]; Gweneth Howarth (m. 1960–88; his death)
Children: Carl Feynman; Michelle Feynman

Richard Phillips Feynman was an American theoretical physicist. He assisted in the development of the atomic bomb during World War II and became known to a wide public in the 1980s as a member of the Rogers Commission, the panel that investigated the Space Shuttle Challenger disaster. In addition to his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing[4][5] and introducing the concept of nanotechnology. He held the Richard Chace Tolman professorship in theoretical physics at the California Institute of Technology.

Early life

Richard Phillips Feynman was born on May 11, 1918, in New York City,[6][7] the son of Lucille (née Phillips), a homemaker, and Melville Arthur Feynman, a sales manager.[8] His family originated from Russia and Poland; both of his parents were Ashkenazi Jews.[9] They were not religious, and by his youth Feynman described himself as an "avowed atheist".[10] Feynman was a late talker, and by his third birthday had yet to utter a single word. He retained a Bronx accent as an adult,[11][12] an accent thick enough to be perceived as an affectation or exaggeration,[13][14] so much so that his good friends Wolfgang Pauli and Hans Bethe would one day comment that Feynman spoke like a "bum".[13] When Richard was five years old, his mother gave birth to a younger brother, but this brother died at four weeks of age. Four years later, Richard gained a sister, Joan, and the family moved to Far Rockaway, Queens.[8] Though separated by nine years, Joan and Richard were close, as they both shared a natural curiosity about the world. Their mother thought that women did not have the cranial capacity to comprehend such things. Despite their mother's disapproval of Joan's desire to study astronomy, Richard encouraged his sister to explore the universe.
Joan eventually became an astrophysicist specializing in interactions between the Earth and the solar wind.[16]

Upon starting high school, Feynman was quickly promoted into a higher math class, and an unspecified school-administered IQ test estimated his IQ at 125: high, but "merely respectable" according to biographer James Gleick.[17] In 1933, when he turned 15, he taught himself trigonometry, advanced algebra, infinite series, analytic geometry, and both differential and integral calculus.[18] Before entering college, he was experimenting with and deriving mathematical topics such as the half-derivative using his own notation. In high school he was developing the mathematical intuition behind his Taylor series of mathematical operators.

Feynman attended Far Rockaway High School, a school also attended by fellow laureates Burton Richter and Baruch Samuel Blumberg.[20] A member of the Arista Honor Society, in his last year in high school Feynman won the New York University Math Championship; the large difference between his score and those of his closest competitors shocked the judges. He applied to Columbia University but was not accepted because of their quota for the number of Jews admitted.[8][21] Instead, he attended the Massachusetts Institute of Technology, where he received a bachelor's degree in 1939 and in the same year was named a Putnam Fellow.[22] He attained a perfect score on the graduate school entrance exams to Princeton University in mathematics and physics, an unprecedented feat, but did rather poorly on the history and English portions.[23] Attendees at Feynman's first seminar included Albert Einstein, Wolfgang Pauli, and John von Neumann. He received a Ph.D. from Princeton in 1942; his thesis advisor was John Archibald Wheeler. Feynman's thesis applied the principle of stationary action to problems of quantum mechanics, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, laying the groundwork for the "path integral" approach and Feynman diagrams, and was titled "The Principle of Least Action in Quantum Mechanics" (James Gleick, Genius: The Life and Science of Richard Feynman).

Manhattan Project

[Photo: Feynman (center) with Robert Oppenheimer (right) relaxing at a Los Alamos social function during the Manhattan Project]

At Princeton, the physicist Robert R. Wilson encouraged Feynman to participate in the Manhattan Project, the wartime U.S. Army project at Los Alamos developing the atomic bomb. Feynman said he was persuaded to join this effort in order to build the bomb before Nazi Germany developed their own. He was assigned to Hans Bethe's theoretical division and impressed Bethe enough to be made a group leader. He and Bethe developed the Bethe–Feynman formula for calculating the yield of a fission bomb, which built upon previous work by Robert Serber. As a junior physicist, he was not central to the project. The greater part of his work was administering the computation group of human computers in the theoretical division (one of his students there, John G. Kemeny, later went on to co-design and co-specify the programming language BASIC). Later, with Nicholas Metropolis, he assisted in establishing the system for using IBM punched cards for computation. On occasion, Feynman would find an isolated section of the mesa where he could drum in the style of American natives; "and maybe I would dance and chant, a little". These antics did not go unnoticed, and rumors spread about a mysterious Indian drummer called "Injun Joe".
He also became a friend of the laboratory head, J. Robert Oppenheimer, who unsuccessfully tried to court him away from his other commitments after the war to work at the University of California, Berkeley.

Feynman alludes to his thoughts on the justification for getting involved in the Manhattan Project in The Pleasure of Finding Things Out. He felt the possibility of Nazi Germany developing the bomb before the Allies was a compelling reason to help with its development for the U.S. He goes on to say, however, that it was an error on his part not to reconsider the situation once Germany was defeated. In the same publication, Feynman also talks about his worries in the atomic bomb age, feeling for some considerable time that there was a high risk that the bomb would be used again soon, so that it was pointless to build for the future. Later he describes this period as a "depression".

Early academic career

Feynman has been called the "Great Explainer".[27] He gained a reputation for taking great care when giving explanations to his students and for making it a moral duty to make the topic accessible. His guiding principle was that, if a topic could not be explained in a freshman lecture, it was not yet fully understood. Feynman gained great pleasure[28] from coming up with such a "freshman-level" explanation, for example, of the connection between spin and statistics. What he said was that groups of particles with spin ½ "repel", whereas groups with integer spin "clump". This was a brilliantly simplified way of demonstrating how Fermi–Dirac statistics and Bose–Einstein statistics evolve as a consequence of studying how fermions and bosons behave under a rotation of 360°. This was also a question he pondered in his more advanced lectures, and to which he demonstrated the solution in the 1986 Dirac memorial lecture.[29] In the same lecture, he further explained that antiparticles must exist, for if particles had only positive energies, they would not be restricted to a so-called "light cone".

Caltech years

[Photo: The Feynman section at the Caltech bookstore]

Feynman did significant work while at Caltech, including research in:

• Quantum electrodynamics. The theory for which Feynman won his Nobel Prize is known for its accurate predictions.[31] This theory was begun in the earlier years during Feynman's work at Princeton as a graduate student and continued while he was at Cornell. This work consisted of two distinct formulations, and it is a common error to confuse them or to merge them into one. The first is his path integral formulation (actually, Feynman could not formulate QED as a Feynman integral, since that involves super-Feynman integrals, which were developed by others in the 1950s), and the second is the formulation of his Feynman diagrams. Both formulations contained his sum over histories method, in which every possible path from one state to the next is considered, the final path being a sum over the possibilities (also referred to as sum-over-paths).[32] For a number of years he lectured to students at Caltech on his path integral formulation of quantum theory. The second formulation of quantum electrodynamics (using Feynman diagrams) was specifically mentioned by the Nobel committee. The logical connection with the path integral formulation is interesting. Feynman did not prove that the rules for his diagrams followed mathematically from the path integral formulation.
Some special cases were later proved by other people, but only in the real case, so the proofs do not work when spin is involved. The second formulation should be thought of as starting anew, but guided by the intuitive insight provided by the first formulation. Freeman Dyson published a paper in 1949 which, among many other things, added new rules to Feynman's that told how to actually implement renormalization. Students everywhere learned and used the powerful new tool that Feynman had created. Eventually computer programs were written to compute Feynman diagrams, providing a tool of unprecedented power. It is possible to write such programs because the Feynman diagrams constitute a formal language with a grammar. Mark Kac provided the formal proofs of the summation over histories, showing that the parabolic partial differential equation can be re-expressed as a sum over different histories (that is, an expectation operator), in what is now known as the Feynman–Kac formula, the use of which extends beyond physics to many applications of stochastic processes.[33]

• Physics of the superfluidity of supercooled liquid helium, where helium seems to display a complete lack of viscosity when flowing. Feynman provided a quantum-mechanical explanation for the Soviet physicist Lev D. Landau's theory of superfluidity.[34] Applying the Schrödinger equation to the question showed that the superfluid was displaying quantum mechanical behavior observable on a macroscopic scale. This helped with the problem of superconductivity; however, the solution eluded Feynman.[35] It was solved with the BCS theory of superconductivity, proposed by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer.

After the success of quantum electrodynamics, Feynman turned to quantum gravity. By analogy with the photon, which has spin 1, he investigated the consequences of a free massless spin-2 field, and derived the Einstein field equation of general relativity, but little more.[39] However, the computational device that Feynman discovered then for gravity, "ghosts", which are "particles" in the interior of his diagrams that have the "wrong" connection between spin and statistics, have proved invaluable in explaining the quantum particle behavior of the Yang–Mills theories, for example QCD and the electroweak theory.

[Photo: Mention of Feynman's prize on the monument at the American Museum of Natural History in New York City. Because the monument is dedicated to American laureates, Tomonaga is not mentioned.]

In 1965, Feynman was appointed a foreign member of the Royal Society.[6][40] In the early 1960s, Feynman exhausted himself by working on multiple major projects at the same time, including a request, while at Caltech, to "spruce up" the teaching of undergraduates. After three years devoted to the task, he produced a series of lectures that eventually became The Feynman Lectures on Physics. He wanted a picture of a drumhead sprinkled with powder to show the modes of vibration at the beginning of the book. Concerned over the connections to drugs and rock and roll that could be made from the image, the publishers changed the cover to plain red, though they included a picture of him playing drums in the foreword. The Feynman Lectures on Physics[41] occupied two physicists, Robert B. Leighton and Matthew Sands, as part-time co-authors for several years. Even though the books were not adopted by most universities as textbooks, they continue to sell well because they provide a deep understanding of physics.
As of 2005, The Feynman Lectures on Physics has sold over 1.5 million copies in English, an estimated 1 million copies in Russian, and an estimated half million copies in other languages.[citation needed] Many of his lectures and miscellaneous talks were turned into other books, including The Character of Physical Law, QED: The Strange Theory of Light and Matter, Statistical Mechanics, Lectures on Gravitation, and the Feynman Lectures on Computation.

Partly as a way to bring publicity to progress in physics, Feynman offered $1,000 prizes for two of his challenges in nanotechnology; one was claimed by William McLellan and the other by Tom Newman.[42] He was also one of the first scientists to conceive the possibility of quantum computers.

[Photo: Richard Feynman at the Robert Treat Paine Estate in Waltham, MA, in 1984]

In 1984–86, he developed a variational method for the approximate calculation of path integrals, which has led to a powerful method of converting divergent perturbation expansions into convergent strong-coupling expansions (variational perturbation theory) and, as a consequence, to the most accurate determination[44] of critical exponents measured in satellite experiments.[45]

Feynman diagrams are now fundamental for string theory and M-theory, and have even been extended topologically.[47] The world-lines of the diagrams have developed to become tubes to allow better modeling of more complicated objects such as strings and membranes. Shortly before his death, Feynman criticized string theory in an interview: "I don't like that they're not calculating anything," he said. "I don't like that they don't check their ideas. I don't like that for anything that disagrees with an experiment, they cook up an explanation—a fix-up to say, 'Well, it still might be true.'" These words have since been much quoted by opponents of the string-theoretic direction for particle physics.[34]

Challenger disaster

A television documentary drama named The Challenger (US title: The Challenger Disaster), detailing Feynman's part in the investigation, was aired in 2013.[51]

Cultural identification

Although born to and raised by parents who were Ashkenazi, Feynman was not only an atheist[52] but declined to be labelled Jewish. He routinely refused to be included in lists or books that classified people by race. He asked not to be included in Tina Levitan's The Laureates: Jewish Winners of the Nobel Prize, writing, "To select, for approbation the peculiar elements that come from some supposedly Jewish heredity is to open the door to all kinds of nonsense on racial theory," and adding "at thirteen I was not only converted to other religious views, but I also stopped believing that the Jewish people are in any way 'the chosen people'".[53]

Personal life

While researching for his Ph.D., Feynman married his first wife, Arline Greenbaum (often misspelled Arlene). They married knowing that Arline was seriously ill with tuberculosis, of which she died in 1945. In 1946, Feynman wrote a letter to her, but kept it sealed for the rest of his life.[54] This portion of Feynman's life was portrayed in the 1996 film Infinity, which featured Feynman's daughter, Michelle, in a cameo role.
He married a second time in June 1952, to Mary Louise Bell of Neodesha, Kansas; this marriage was unsuccessful, ending in a divorce complaint filed by Bell.[2] He later married Gweneth Howarth (1934–1989) from Ripponden, Yorkshire, who shared his enthusiasm for life and spirited adventure.[36] Besides their home in Altadena, California, they had a beach house in Baja California, purchased with the prize money from Feynman's Nobel Prize, his one-third share of $55,000. They remained married until Feynman's death. They had a son, Carl, in 1962, and adopted a daughter, Michelle, in 1968.[36]

Feynman had a great deal of success teaching Carl, using, for example, discussions about ants and Martians as a device for gaining perspective on problems and issues. He was surprised to learn that the same teaching devices were not useful with Michelle.[37] Mathematics was a common interest for father and son; they both entered the computer field as consultants and were involved in advancing a new method of using multiple computers to solve complex problems, later known as parallel computing. The Jet Propulsion Laboratory retained Feynman as a computational consultant during critical missions. One co-worker characterized Feynman as akin to Don Quixote at his desk, rather than at a computer workstation, ready to do battle with the windmills.

Feynman traveled widely, notably to Brazil, where he gave courses at the CBPF (Brazilian Center for Physics Research), and near the end of his life he schemed to visit the Russian land of Tuva, a dream that, because of Cold War bureaucratic problems, never became reality.[55] The day after he died, a letter arrived for him from the Soviet government, giving him authorization to travel to Tuva. Out of his enthusiastic interest in reaching Tuva came the phrase "Tuva or Bust" (also the title of a book about his efforts to get there), which was tossed about frequently amongst his circle of friends in the hope that they, one day, could see it firsthand. The documentary movie Genghis Blues mentions some of his attempts to communicate with Tuva and chronicles the successful journey there by his friends.

Responding to Hubert Humphrey's congratulation for his Nobel Prize, Feynman admitted to a long admiration for the then vice president.[56] In a letter to an MIT professor dated December 6, 1966, Feynman expressed interest in running for governor of California.[57]

Feynman took up drawing at one time and enjoyed some success under the pseudonym "Ofey", culminating in an exhibition of his work. He learned to play a metal percussion instrument (frigideira) in a samba style in Brazil, and participated in a samba school.

According to Genius, the James Gleick-authored biography, Feynman tried LSD during his professorship at Caltech.[34] Somewhat embarrassed by his actions, he largely sidestepped the issue when dictating his anecdotes; he mentions it in passing in the "O Americano, Outra Vez" section, while the "Altered States" chapter in Surely You're Joking, Mr. Feynman!
describes only marijuana and ketamine experiences at John Lilly's famed sensory deprivation tanks, as a way of studying consciousness.[25] Feynman gave up alcohol when he began to show vague, early signs of alcoholism, as he did not want to do anything that could damage his brain, the same reason given in "O Americano, Outra Vez" for his reluctance to experiment with LSD.[25]

Feynman had a minor acting role in the film Anti-Clock, credited as "The Professor".[59]

Feynman had two rare forms of cancer, liposarcoma and Waldenström's macroglobulinemia, and died shortly after a final attempt at surgery for the former on February 15, 1988, aged 69.[34] His last recorded words are noted as, "I'd hate to die twice. It's so boring."[34][60]

Popular legacy

Actor Alan Alda commissioned playwright Peter Parnell to write a two-character play about a fictional day in the life of Feynman, set two years before Feynman's death. The play, QED, which was based on writings about Richard Feynman's life during the 1990s, premiered at the Mark Taper Forum in Los Angeles, California in 2001. The play was then presented at the Vivian Beaumont Theater on Broadway, with both presentations starring Alda as Richard Feynman.[61]

The principal character in Thomas A. McMahon's 1970 novel, Principles of American Nuclear Chemistry: A Novel, is modeled on Feynman.[citation needed]

In February 2008, LA Theatre Works released a recording of Moving Bodies with Alfred Molina in the role of Richard Feynman. This radio play, written by playwright Arthur Giron, is an interpretation of how Feynman became one of the iconic American scientists, and is loosely based on material found in Feynman's two transcribed oral memoirs, Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think?.

On the twentieth anniversary of Feynman's death, composer Edward Manukyan dedicated a piece for solo clarinet to his memory.[66] It was premiered by Doug Storey, the principal clarinetist of the Amarillo Symphony.

Between 2009 and 2011, clips of an interview with Feynman were used by composer John Boswell as part of the Symphony of Science project in the second, fifth, seventh, and eleventh installments of his videos, "We Are All Connected", "The Poetry of Reality", "A Wave of Reason", and "The Quantum World".[67]

In a 1992 New York Times article on Feynman and his legacy, James Gleick recounts the story of how Murray Gell-Mann described what has become known as "The Feynman Algorithm" or "The Feynman Problem-Solving Algorithm" to a student: "The student asks Gell-Mann about Feynman's notes. Gell-Mann says no, Dick's methods are not the same as the methods used here. The student asks, well, what are Feynman's methods? Gell-Mann leans coyly against the blackboard and says: Dick's method is this. You write down the problem. You think very hard. (He shuts his eyes and presses his knuckles parodically to his forehead.) Then you write down the answer."[68]

In 1998, a photograph of Richard Feynman giving a lecture was part of the poster series commissioned by Apple Inc.
for their "Think Different" advertising campaign.[69] In 2013, the BBC drama The Challenger depicted Feynman's role on the Rogers Commission in exposing the O-ring flaw in NASA's solid-rocket boosters (SRBs), itself based in part on Feynman's book What Do You Care What Other People Think?[71][72] Selected scientific works Textbooks and lecture notes Popular works Audio and video recordings • The Feynman Lectures on Physics: The Complete Audio Collection • The Messenger Lectures, given at Cornell in 1964, in which he explains basic topics in physics. Available on Project Tuva for free (See also the book The Character of Physical Law) • The Pleasure of Finding Things Out on YouTube (1981) (not to be confused with the later published book of same title) • Elementary Particles and the Laws of Physics (1986) • Computers From the Inside Out (video) • Idiosyncratic Thinking Workshop (video, 1985) • Strangeness Minus Three (video, BBC Horizon 1964) • No Ordinary Genius (video, Cristopher Sykes Documentary) • Nature of Matter (audio)
December 9, 2005

Intellectual flirting

I transcribe two pages from the Scottish novel Mobius Dick, where Andrew Crumey presents a fine example of intellectual flirting. The novel, for the rest, is about quantum physics, genius and madness, coincidence, chance and fortune, and the Doppelgänger.

His affair with Helen was a matter of chance, too. They met over lunch; not sandwiches in a seminar room, but a crowded university canteen where they found themselves sharing a table. They each placed a book beside their food as they sat down opposite one another, preparing to dine in polite, mutually oblivious silence. Hers was Doktor Faustus. His was Quantum Fields in Curved Space. Perhaps, when she finally spoke, it was simply because she'd grown tired of her quiche. "I wish I could understand that", she said suddenly, her mouth not entirely free of food, nodding in the direction of Ringer's book. "But I was always terrible at maths". "And I've never been good with novels", he replied. She looked puzzled; her smooth brow became knotted with a bemused wrinkle. "What's so difficult about reading a novel?" she asked, following the remark with a mouthful of salad while he paused over his fish and chips. "They bore me", Ringer said. "All those made-up stories about people who never existed. Where are the facts? Where are the ideas? I want a book to give me a window on a new way of thinking; not mirror things I already know". "Then perhaps you should try this novelist", she said, tapping the book beside her with the base of her fork. "Thomas Mann. Plenty of ideas there, believe me". Mann, she explained, was fond of bringing a great deal of background information into his stories. "For example", she said, "take this part here". She put down her knife and fork, lifted the German novel and leafed through it. Ringer noticed how pretty she looked, her dark hair tumbling across her forehead, making her resemble a serious schoolgirl before an audience of parents as she carefully located one of several parts labelled with protruding bookmarks of yellow paper. She then began to translate for him a passage that slowly assembled itself into what he recognized to be a reference to cosmological expansion, buried in a novel about a composer who invents a new kind of art and pays for it with his sanity. "Perhaps I ought to read Doktor Faustus", he said. "It sounds better than most novels". "First try The Magic Mountain", she told him. "That's about a man who goes to a tuberculosis clinic in the Swiss Alps. It came out in the nineteen twenties, and Mann got the Nobel Prize not long after". "That's a striking coincidence". "What is?" "The fact that we should both be sitting here, you with your Thomas Mann and me with my physics. Because the main subject of my book is something called the Schrödinger equation. It's the fundamental rule of quantum mechanics. And do you know how Schrödinger found it? One Christmas in the nineteen twenties he went to a tuberculosis clinic in the Swiss Alps". Coincidences mean only whatever we want them to. Thomas Mann wrote a novel about a sanatorium, then a year after it was published, Erwin Schrödinger went to a similar establishment and made his famous discovery. Both got the Nobel Prize for their efforts and became celebrated as philosophers of their age. Is there any connection? Absolutely none. "It's an interesting parallel", she'd said to him, pushing a lettuce leaf with her fork. "Mann and Schrödinger. I might even be able to work it into my thesis."
She was studying German literature in relation to philosophy. "But how do physicists get their inspiration? I'd be fascinated to know". Helen looked at him across the table with eyes that suddenly promised more than a conversation.

robespierre said... Kicks... hey, reading this space I came across the Napoleonic speech. It interests me; could you send me the source, or the translation into our dear Castilian tongue? Regards

Humberto said... Marvelous. I want to be a character in that novel: the cook.
A Lever Long Enough: A History of Columbia's School of Engineering and Applied Science Since 1864, by Robert McCaughey

In this institutional and social history of Columbia University's School of Engineering and Applied Science (SEAS), Robert McCaughey combines archival research with oral testimony and contemporary interviews to build a portrait, both critical and celebratory, of one of the oldest engineering schools in the United States. McCaughey follows the evolving, occasionally rocky, and now integrated relationship between SEAS's engineers and the rest of the Columbia University student body, faculty, and administration. He also revisits the interaction between the SEAS staff and the people and institutions of the city of New York, where the school has resided since its founding in 1864. He compares the historical struggles and achievements of the school's engineers with their present-day battles and accomplishments, and he contrasts their teaching and research approaches with those of their peers at other free-standing and Ivy League engineering schools. What begins as a localized history of a school striving to define itself within a university known for its strengths in the humanities and the social sciences becomes a wider story of the transformation of the applied sciences into a critical component of American technology and education.

Best technology books

The Infrared & Electro-Optical Systems Handbook: Atmospheric Propagation of Radiation
This eight-volume set offers state-of-the-art information on infrared and electro-optical systems. The handbook has been revised, and features 45 chapters written by 80 experts in IR/EO technology. Subjects addressed include passive EO systems and atmospheric propagation of radiation.

Technology Innovations for Behavioral Education (Mary Gregerson, editor)
Evolving alongside technological advances is a new generation of tech-savvy, media-attuned students, particularly in graduate and medical programs. But while much is being made of a growing digital divide between teachers and learners, creative instructors are using the new electronic media to design educational approaches that are inventive and practical, engaging and effective.

Technischer Lehrgang Stoßdämpfer (German edition)
This technical course explains the theory and working principle of the shock absorber. It shows the practical application and explains the various designs. Descriptions of shock-absorber testing and fault diagnosis complete the presentation.

Extra info for A Lever Long Enough: A History of Columbia's School of Engineering and Applied Science Since 1864

Example text

The state space of a composite system is the tensor product of the state spaces of the component physical systems. For instance, for two components A and B, the total Hilbert space of the composite system becomes $\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B$. A state vector on the composite space is written as $|\psi\rangle_{AB} = |\psi\rangle_A \otimes |\psi\rangle_B$. The tensor product is often abbreviated as $|\psi\rangle_A |\psi\rangle_B$, or, equivalently, the labels are written in the same ket $|\psi_A\psi_B\rangle$. As an example, assume that the component spaces are two-dimensional, and choose a basis in each. If we understand these states, solving the time-dependent Schrödinger equation becomes easier for any other state.
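For the two-dimensional composite example just mentioned, the joint state can be built numerically as a Kronecker product; this small illustration is mine, not the book's:

```python
import numpy as np

# Composite of two two-dimensional systems: the joint state vector is the
# Kronecker (tensor) product of the component state vectors.
up = np.array([1.0, 0.0])        # basis state |0> of a component
down = np.array([0.0, 1.0])      # basis state |1>

psi_A = (up + down) / np.sqrt(2) # a superposition state of system A
psi_B = up                       # a basis state of system B

psi_AB = np.kron(psi_A, psi_B)   # |psi>_AB = |psi>_A (x) |psi>_B
print(psi_AB)                    # 4 amplitudes, one per joint basis state
```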
An excited state is any state with energy greater than the ground state. Consider an eigenstate $\psi_\alpha$ of the Hamiltonian, $H\psi_\alpha = E_\alpha\psi_\alpha$. Taking the Taylor expansion of the exponential, we observe how the time evolution operator acts on this eigenstate:

$$e^{-iHt/\hbar}\,|\psi_\alpha(0)\rangle = \sum_n \frac{1}{n!}\left(\frac{-iHt}{\hbar}\right)^n|\psi_\alpha(0)\rangle = \sum_n \frac{1}{n!}\left(\frac{-iE_\alpha t}{\hbar}\right)^n|\psi_\alpha(0)\rangle = e^{-iE_\alpha t/\hbar}\,|\psi_\alpha(0)\rangle.$$

We define $U(H, t) = e^{-iHt/\hbar}$. This is the time evolution operator of a closed quantum system.

Unsupervised learning is a vast field, and this chapter barely offers a glimpse. The algorithms in this chapter are the most relevant to quantum methods that have already been published.

Principal Component Analysis

Let us assume that the data matrix $X$, consisting of the data instances $\{x_1, \ldots, x_N\}$, has a zero mean. Principal component analysis looks at the eigenstructure of $X^\top X$. This $d \times d$ square matrix, where $d$ is the dimensionality of the feature space, is known as the empirical sample covariance matrix in the statistical literature.
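A quick numerical check of the phase-factor identity reconstructed above (my illustration, not the book's; units with hbar = 1, and a random 4×4 Hermitian matrix standing in for the Hamiltonian):

```python
import numpy as np
from scipy.linalg import expm

# Check that U(H, t) = exp(-iHt) maps an energy eigenstate to itself
# times the phase exp(-iE_alpha t).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                 # a random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)
psi = V[:, 0]                     # eigenstate with eigenvalue E[0]

t = 0.7
U = expm(-1j * H * t)             # time evolution operator
print(np.allclose(U @ psi, np.exp(-1j * E[0] * t) * psi))  # True
```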
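The PCA description likewise maps directly onto a few lines of code; the data matrix below is synthetic and purely illustrative:

```python
import numpy as np

# PCA as the eigenstructure of X^T X for a zero-mean data matrix X
# (N instances as rows, d features as columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.1])
X -= X.mean(axis=0)                   # enforce the zero-mean assumption

C = X.T @ X / len(X)                  # empirical sample covariance, d x d
variances, axes = np.linalg.eigh(C)   # eigenvalues in ascending order
print(variances[::-1])                # variance along each principal axis
# Columns of axes[:, ::-1] are the principal components, largest first.
```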
Friday, 29 July 2016

Secret of Laser vs Secret of Piano

Thursday, 28 July 2016

New Quantum Mechanics 10: Ionisation Energy

Below are sample computations of ground states for Li1+, C1+, Ne1+ and Na1+, showing good agreement with table data of first ionisation energies of 0.2, 0.4, 0.8 and 0.2 Hartree, respectively. Note that computation of first ionisation energy is delicate, since it represents a small fraction of total energy.

Wednesday, 27 July 2016

New Quantum Mechanics 9: Alkaline (Earth) Metals

The result presentation continues below with alkali and alkaline earth metals Na (2-8-1), Mg (2-8-2), K (2-8-8-1), Ca (2-8-8-2), Rb (2-8-18-8-1), Sr (2-8-18-8-2), Cs (2-8-18-18-8-1) and Ba (2-8-18-18-8-2).

New Quantum Mechanics 8: Noble Gases Atoms 18, 36, 54 and 86

The presentation of computational results continues below with the noble gases Ar (2-8-8), Kr (2-8-18-8), Xe (2-8-18-18-8) and Rn (2-8-18-32-18-8), with the shell structure indicated. Again we see good agreement of ground state energy with NIST data, and we notice nearly equal energy in fully filled shells. Note that the NIST ionization data does not reveal true shell energies, since it displays a fixed shell energy distribution independent of ionization level, and thus cannot be used for comparison of shell energies.

New Quantum Mechanics 7: Atoms 1-10

This post presents computations with the model of New Quantum Mechanics 5 for ground states of atoms with N = 2 - 10 electrons in spherical symmetry, with 2 electrons in an inner spherical shell and N-2 electrons in an outer shell, and with the radius of the free boundary at the interface of the shells adjusted to maintain continuity of charge density. The electrons in each shell are smeared to spherical symmetry, and the repulsive electron potential is reduced by the factor (n-1)/n, with n the number of electrons in a shell, to account for the lack of self-repulsion. The ground state is computed by parabolic relaxation in the charge density formulation of New Quantum Mechanics 1, with restoration of total charge after each relaxation step, and shows good agreement with table data, as shown in the figures below. The graphs show, as functions of radius, charge density per unit volume in color, charge density per unit radius in black, kernel potential in green and total electron potential in cadmium red. The homogeneous Neumann condition at the interface of charge density per unit volume is clearly visible.

The shell structure with 2 electrons in the inner shell and N-2 in the outer shell is imposed based on a principle of "electron size" depending on the strength of the effective kernel potential, which gives the familiar pattern of 2-8-18-32 electrons in successively filled shells as a consequence of shell volume of nearly constant thickness scaling quadratically with shell number. This replaces the ad hoc unphysical Pauli exclusion principle with a simple physical principle of size and no overlap. The electron size principle allows the first shell to house at most 2 electrons, the second shell 8 electrons, the third 18 electrons, etc.

In the next post similar results for Atoms 11-86 will be presented, and it will be noted that a characteristic of a filled shell structure 2-8-18-32 is comparable total energy in each shell, as can be seen for Neon below. The numbers below show table data of total energy in the first line and the computed value in the second line, while the groups show total energy, kinetic energy, kernel potential energy and electron potential energy in each shell.
Monday, 25 July 2016

New Quantum Mechanics 6: H2 Molecule

• kernel distance = 1.44

Sunday, 24 July 2016

New Quantum Mechanics 5: Model as Schrödinger + Neumann

This sequence of posts presents an alternative Schrödinger equation for an atom with $N$ electrons starting from a wave function Ansatz of the form

• $\psi (x,t) = \sum_{j=1}^N\psi_j(x,t)$      (1)

as a sum of $N$ electronic complex-valued wave functions $\psi_j(x,t)$, depending on a common 3d space coordinate $x$ and a time coordinate $t$, with non-overlapping spatial supports $\Omega_j(t)$ filling 3d space, satisfying for $j=1,...,N$ and all time:

• $i\dot\psi_j + H\psi_j = 0$ in $\Omega_j$,       (2a)
• $\frac{\partial\psi_j}{\partial n} = 0$ on $\Gamma_j(t)$,   (2b)

where $\Gamma_j(t)$ is the boundary of $\Omega_j(t)$, $\dot\psi =\frac{\partial\psi}{\partial t}$ and $H=H(x,t)$ is the (normalised) Hamiltonian given by

• $H = -\frac{1}{2}\Delta - \frac{N}{\vert x\vert} + \sum_{k\neq j}V_k(x)$ in $\Omega_j$,

with $V_k(x)$ the repulsion potential corresponding to electron $k$ defined by

• $V_k(x) = \int\frac{\psi_k^2(y,t)}{\vert x-y\vert}\, dy$,

and the electron wave functions are normalised to unit charge of each electron:

• $\int_{\Omega_j(t)}\psi_j^2(x,t) dx=1$ for $j=1,..,N$ and all time.   (2c)

The differential equation (2a) with homogeneous Neumann boundary condition (2b) is complemented by the following global free boundary condition:

• $\psi (x,t)$ is continuous across inter-electron boundaries $\Gamma_j(t)$.    (2d)

The ground state is determined as the real-valued time-independent minimiser $\psi (x)=\sum_j\psi_j(x)$ of the total energy

• $E(\psi ) = \frac{1}{2}\int\vert\nabla\psi\vert^2\, dx - \int\frac{N\psi^2(x)}{\vert x\vert}dx+\sum_{k\neq j}\int V_k(x)\psi^2(x)\, dx$,

under the normalisation (2c), the homogeneous Neumann boundary condition (2b) and the free boundary condition (2d). In the next post I will present computational results in the form of energy of ground states for atoms with up to 54 electrons and corresponding time-periodic solutions in spherical symmetry, together with ground state and dissociation energy for H2 and CO2 molecules in rotational symmetry.

In summary, the model is formed as a system of one-electron Schrödinger equations, or electron container model, on a partition of 3d space depending on a common spatial variable and time, supplemented by a homogeneous Neumann condition for each electron on the boundary of its domain of support, combined with a free boundary condition asking continuity of charge density across inter-electron boundaries. We shall see that for atoms with spherically symmetric electron partitions in the form of a sequence of shells centered at the kernel, the homogeneous Neumann condition corresponds to vanishing kinetic energy of each electron normal to the boundary of its support, as a condition of separation or interface condition between different electrons meeting with continuous charge density.

Here is one example: Argon with 2-8-8 shell structure, with the NIST Atomic Data Base ground state energy in the first line (526.22), the computed energy in the second line, and the total energies in the different shells in three groups, with kinetic energy in the second row, kernel potential energy in the third and repulsive electron energy in the last row. Note that the total energies in the fully filled first (2 electrons) and second (8 electrons) shells are nearly the same, while the partially filled third shell (also 8 electrons, out of 18 when fully filled) has lower energy. The color plot shows charge density per unit volume, and the black curve charge density per unit radial increment, as functions of radius.
The green curve is the kernel potential and the cyan the total electron potential. Note in particular the vanishing derivative of charge density/kinetic energy at shell interfaces.

Saturday, 2 July 2016

New Quantum Mechanics 4: Free Boundary Condition

This is a continuation of previous posts presenting an atom model in the form of a free boundary problem for a joint continuously differentiable electron charge density, as a sum of individual electron charge densities with disjoint supports, satisfying a classical Schrödinger wave equation in 3 space dimensions. The ground state of minimal total energy is computed by parabolic relaxation, with the free boundary separating different electrons determined by a condition of zero gradient of charge density. Computations in spherical symmetry show close correspondence with observation, as illustrated by the case of Oxygen with 2 electrons in an inner shell (blue) and 6 electrons in an outer shell (red), shown below in a radial plot of charge density; note in particular the zero gradient of charge density at the boundary separating the shells at minimum total energy (with -74.81 observed and -74.91 computed energy). The green curve shows the truncated kernel potential, the magenta the electron potential, and the black curve charge density per radial increment. The new aspect is the free boundary condition as zero gradient of charge density/kinetic energy.
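These posts describe computing ground states by parabolic relaxation of the charge density, with total charge restored after each step. As a rough illustration of that numerical idea (a minimal sketch of mine, not the blog's code, and only for the trivial one-electron case with no free boundary), the following relaxes the radial Schrödinger problem for a hydrogen-like kernel; the grid, step size and iteration count are arbitrary choices:

```python
import numpy as np

# Imaginary-time (parabolic) relaxation for the radial ground state.
# With u(r) = r*psi(r):  -1/2 u'' - (Z/r) u = E u,  u(0) = u(R) = 0.
# Flow: du/dtau = 1/2 u'' + (Z/r) u, renormalizing (restoring total
# charge) after every step.

Z = 1.0                      # kernel charge (hydrogen)
R, n = 20.0, 600             # domain radius and number of grid points
r = np.linspace(R / n, R, n) # grid avoiding r = 0
h = r[1] - r[0]

u = r * np.exp(-r)           # initial guess
u /= np.sqrt(np.sum(u**2) * h)

def laplacian(u):
    """Second difference with u = 0 ghost values at both boundaries."""
    up = np.pad(u, 1)
    return (up[2:] - 2.0 * u + up[:-2]) / h**2

dtau = 0.4 * h**2            # stable explicit step for the stiff u'' term
for _ in range(50000):
    u += dtau * (0.5 * laplacian(u) + (Z / r) * u)
    u /= np.sqrt(np.sum(u**2) * h)   # restore total charge to 1

E = np.sum(u * (-0.5 * laplacian(u) - (Z / r) * u)) * h
print(E)   # ≈ -0.5 Hartree, the exact hydrogen ground-state energy
```

For several electrons, the same kind of flow would be run on each shell domain, with the interface radius adjusted to satisfy the continuity and Neumann conditions described in the posts above.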
Patent details for nuclear fusion using lasers and ultra-dense deuterium

Researchers at the University of Gothenburg and the University of Iceland are researching a new type of nuclear fusion process. This produces almost no neutrons but instead fast, heavy electrons (muons), since it is based on nuclear reactions in ultra-dense heavy hydrogen (deuterium). The new fusion process can take place in relatively small laser-fired fusion reactors fueled by heavy hydrogen (deuterium). They have gotten twice the energy out compared with what they put in, and believe they can get to 20 times the energy out. Leif Holmlid filed a patent in 2012.

The nuclear fusion method comprises the following steps:

1. bringing hydrogen in a gaseous state into contact with a hydrogen transfer catalyst configured to cause a transition of the hydrogen from the gaseous state to an ultra-dense state;
2. collecting the hydrogen in the ultra-dense state on a carrier configured to substantially confine the hydrogen in the ultra-dense state within a fuel collection portion of the carrier;
3. transporting the carrier to an irradiation location; and
4. subjecting, at the irradiation location, the hydrogen in the ultra-dense state to irradiation having sufficient energy to achieve break-even in energy generation by nuclear fusion.

Computational studies of the laser pulse energy required for break-even exist (see S. A. Slutz and R. A. Vesey, "Fast ignition hot spot break-even scaling", Phys. Plasmas 12 (2005) 062702). These studies yield a pulse energy around 1 J at break-even. In their experiments, break-even is indeed observed at 1 J pulse energy. From break-even to an energy gain of 1000, a further factor of at least 4 in laser pulse energy is required. They conclude that the available information agrees that useful power output from nuclear fusion in ultra-dense hydrogen will be found at laser pulse energies of 4 J to 1 kJ. Such a pulse energy is feasible.

By hydrogen in an "ultra-dense state" should, at least in the context of the present application, be understood hydrogen in the form of a quantum material (quantum fluid) in which adjacent nuclei are within one Bohr radius of each other. In other words, the nucleus-nucleus distance in the ultra-dense state is considerably less than 50 picometers. In the following, hydrogen in the ultra-dense state will be referred to as H(-1) (or D(-1) when deuterium is specifically referred to). The terms "hydrogen in an ultra-dense state" and "ultra-dense hydrogen" are used synonymously throughout this application.

A "hydrogen transfer catalyst" is any catalyst capable of absorbing hydrogen gas molecules (H2) and dissociating these molecules to atomic hydrogen, that is, catalyzing the reaction H2 → 2H. The name hydrogen transfer catalyst implies that the so-formed hydrogen atoms on the catalyst surface can rather easily attach to other molecules on the surface and thus be transferred from one molecule to another. The hydrogen transfer catalyst may further be configured to cause a transition of the hydrogen into the ultra-dense state if the hydrogen atoms are prevented from re-forming covalent bonds. The mechanisms behind the catalytic transition from the gaseous state to the ultra-dense state are quite well understood, and it has been experimentally shown that this transition can be achieved using various hydrogen transfer catalysts, including, for example, commercially available so-called styrene catalysts, as well as (purely) metallic catalysts, such as iridium and palladium.
It should be noted that the hydrogen transfer catalyst does not necessarily have to transition the hydrogen in the gaseous state to the ultra-dense state directly upon contact with the hydrogen transfer catalyst. Instead, the hydrogen in the gaseous state may first be caused to transition to a dense state H(1), and later spontaneously transition to the ultra-dense state H(-1). Also in this latter case, the hydrogen transfer catalyst has caused the hydrogen to transition from the gaseous state to the ultra-dense state.

At a rate of one carrier foil per second, each carrying 3 µg of ultra-dense deuterium giving fusion ignition, the energy output of a power station using this method is approximately 1 MW. This would use 95 g of deuterium per year to produce 9 GWh, or one 5 liter gas bottle at 100 bar standard pressure (these figures are checked numerically at the end of this section). By using several lines of target carrier production, several laser lines, or a higher repetition rate laser, the output of the power station can be scaled relatively easily.

Catalytic conversion

The catalytic process may employ commercial so-called styrene catalysts, i.e., a type of solid catalyst used in the chemical industry for producing styrene (for plastic production) from ethylbenzene. This type of catalyst is made from a porous Fe-O material with several different additives, especially potassium (K) as a so-called promoter. The function of this catalyst has been studied in detail. The catalyst is designed to split off hydrogen atoms from ethylbenzene so that a carbon-carbon double bond is formed, and then to combine the hydrogen atoms so released into hydrogen molecules, which easily desorb thermally from the catalyst surface. This reaction is reversible: if hydrogen molecules are added to the catalyst, they are dissociated to hydrogen atoms which are adsorbed on the surface. This is a general process in hydrogen transfer catalysts. We utilize this mechanism to produce ultra-dense hydrogen, which requires that covalent bonds in hydrogen molecules are not allowed to form after the adsorption of hydrogen in the catalyst.

The potassium promoter in the catalyst provides for a more efficient formation of ultra-dense hydrogen. Potassium (and, for example, other alkali metals) easily forms so-called circular Rydberg atoms K*. In such atoms, the valence electron is in a nearly circular orbit around the ion core, in an orbit very similar to a Bohr orbit. At a few hundred °C not only Rydberg states are formed at the surface, but also small clusters of Rydberg states K_N*, in a form called Rydberg Matter (RM). This type of cluster is probably the active form of the potassium promoter in normal industrial use of the catalyst. The clusters K_N* transfer part of their excitation energy to the hydrogen atoms at the catalyst surface. This process takes place during thermal collisions in the surface phase. This gives formation of clusters H_N* (where H indicates proton, deuteron, or triton) in the ordinary process also giving the K_N* formation, namely cluster assembly during the desorption process. If the hydrogen atoms could form covalent bonds, molecules H2 would instead leave the catalyst surface and no ultra-dense material could be formed. In the RM material, the electrons are not in s orbitals, since they always have an orbital angular momentum greater than zero. This implies that covalent bonds cannot be formed, since the electrons on the atoms must be in s orbitals to form the normal covalent sigma (σ) bonds in H2.
The lowest energy level for hydrogen in the form of RM is metallic (dense) hydrogen, called H(1), with a bond length of 150 picometers (pm). The hydrogen material falls down to this level by emission of infrared radiation. Dense hydrogen is then spontaneously converted to ultra-dense hydrogen, called H(-1), with a bond distance of 2-4 pm depending on which particles (protons, deuterons, tritons) are bound. This material is a quantum material (quantum fluid) which probably involves both electron pairs (Cooper pairs) and nuclear pairs (proton, deuteron or triton pairs, or mixed pairs). These materials are probably both superfluid and superconductive at room temperature, as predicted for ultra-dense deuterium and confirmed in recent experiments.

Review of Scientific Instruments – Efficient source for the production of ultradense deuterium D(-1) for laser-induced fusion (ICF) (2011)

A novel source which simplifies the study of ultradense deuterium D(-1) is now described. This means one step further toward deuterium fusion energy production. The source uses internal gas feed, and D(-1) can now be studied without time-of-flight spectral overlap from the related dense phase D(1). The main aim here is to understand the material production parameters, and thus a relatively weak laser with focused intensity less than a trillion watts per square centimeter is employed for analyzing the D(-1) material. The properties of the D(-1) material at the source are studied as a function of laser focus position outside the emitter, deuterium gas feed, laser pulse repetition frequency and laser power, and temperature of the source. These parameters influence the D(-1) cluster size, the ionization mode, and the laser fragmentation patterns.

Journal of Fusion Energy – Ultradense Deuterium – F. Winterberg (2010)

An attempt is made to explain the recently reported occurrence of ultradense deuterium as an isothermal transition of Rydberg matter into a high density phase by quantum mechanical exchange forces. It is conjectured that the transition is made possible by the formation of vortices in a Cooper pair electron fluid, separating the electrons from the deuterons, with the deuterons undergoing Bose–Einstein condensation in the core of the vortices. If such a state of deuterium should exist at the reported density of about 130,000 g/cm3, it would greatly facilitate the ignition of a thermonuclear detonation wave in pure deuterium, by placing the deuterium in a thin disc, to be ignited by a pulsed ultrafast laser or particle beam of modest energy.

Physics Letters A – Ultra-dense deuterium and cold fusion claims – F. Winterberg (2010)

An attempt is made to explain the recently reported occurrence of 14 MeV neutron induced nuclear reactions in deuterium metal hydrides as the manifestation of a slightly radioactive ultra-dense form of deuterium, with a density of 130,000 g/cm3, observed by a Swedish research group through the collapse of deuterium Rydberg matter. In accordance with this observation it is proposed that a large number of deuterons form a "linear-atom" supermolecule. By the Madelung transformation of the Schrödinger equation, the linear deuterium supermolecule can be described by a quantized line vortex. A vortex lattice made up of many such supermolecules is possible only with deuterium, because deuterons are bosons, and the same is true for the electrons, which by the electron–phonon interaction in a vortex lattice form Cooper pairs.
It is conjectured that the latent heat released by the collapse into the ultra-dense state has been misinterpreted as cold fusion. Hot fusion, though, is possible here through the fast ignition of a thermonuclear detonation wave from a hot spot made with a 1 kJ, 10 petawatt laser in a thin slice of the ultra-dense deuterium.
From classical to quantum chaos

You may remember the last "Physics Friday" post: we showed a phase space map of a billiard in a magnetic field as an example of a system showing classical chaos. As in classical Hamiltonian systems, the concept of integrability can also be applied to quantum systems. In quantum mechanics, conserved quantities are related to symmetries and take the form of quantum numbers. These quantum numbers are the eigenvalues of operators that "generate" the transformation under which the system is invariant (i.e., the operator counterparts of the classical conserved quantities).

The term quantum chaos might suggest unpredictable behavior in quantum phenomena, but this is not the case. In fact, the solution of the linear Schrödinger equation cannot behave chaotically in the way the trajectories of classically chaotic systems do. Quantum chaos means a quantum manifestation of chaos in deterministic classical mechanics (Nakamura in Quantum Chaos and Quantum Dots). That is, the manifestation of chaos is the common characteristic phenomenon of quantized classically chaotic systems.

One way to see the effect of the classical dynamics is to study local statistics of the energy spectrum, such as the level spacing distribution P(s), which is the distribution function of nearest-neighbour spacings as we run over all energy levels. A dramatic insight of quantum chaos is given by the universality conjectures for P(s):
• If the classical dynamics is integrable, then P(s) coincides with the corresponding quantity for a sequence of uncorrelated levels (the Poisson ensemble) with the same mean spacing.
• If the classical dynamics is chaotic, then P(s) coincides with the corresponding quantity for the eigenvalues of a suitable ensemble of random matrices.

[Figure: Level spacing distribution for the energy spectrum of a quantum particle in a circular region vs. the level spacing distribution for the Gaussian Unitary Ensemble, Gaussian Orthogonal Ensemble, and Poisson, respectively. Kriecherbauer T et al., PNAS 2001;98:10531-10532]

Note that not a single instance of these conjectures has been proved; in fact, there are counterexamples, but the conjectures are expected to hold "generically", that is, unless we have a good reason to think otherwise.
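A quick numerical illustration of the two conjectured universality classes needs no billiard at all. The sketch below is ours, not the post's: it compares nearest-neighbour spacings of uncorrelated levels, which follow P(s) ≈ e^(-s), with those of a Gaussian Orthogonal Ensemble (GOE) matrix, which show level repulsion, P(s) → 0 as s → 0.

```python
import numpy as np

def spacings(levels):
    """Nearest-neighbour spacings, rescaled to unit mean spacing."""
    s = np.diff(np.sort(levels))
    return s / s.mean()

rng = np.random.default_rng(0)
n = 2000

# Integrable-like case: a sequence of uncorrelated (Poisson) levels.
poisson_levels = np.cumsum(rng.exponential(1.0, n))

# Chaotic-like case: eigenvalues of one GOE random matrix.
a = rng.normal(size=(n, n))
goe_levels = np.linalg.eigvalsh((a + a.T) / 2)
# Keep only the central part of the spectrum, where the level density
# is flat enough for this crude global rescaling ("unfolding") to do.
goe_levels = goe_levels[n // 4 : 3 * n // 4]

for name, lv in [("Poisson", poisson_levels), ("GOE", goe_levels)]:
    P, _ = np.histogram(spacings(lv), bins=20, range=(0, 3), density=True)
    print(f"{name:8s} P(s) in the first bin (s ~ 0): {P[0]:.2f}")
```

With these samples the first histogram bin comes out close to 1 for the Poisson levels and close to 0 for the GOE levels, the fingerprint of level repulsion that the conjectures attach to chaotic classical dynamics.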
Viewpoint: A Quantum Constellation

Qian Niu, Department of Physics, The University of Texas at Austin, Austin, TX 78712-0264, USA and International Center for Quantum Materials, Peking University, Beijing, China

Published June 11, 2012 | Physics 5, 65 (2012) | DOI: 10.1103/Physics.5.65

Figure 1: (Left) In a spin-1/2 or other two-level system, any state can be built from a superposition of a spin-up and a spin-down state (blue arrows). This superposition corresponds to a point on a unit sphere, defined by the vector n. (Right) Planar projection of the northern hemisphere of a spherical representation of the Majorana stars for a spin J=25. The cloudy regions represent the probability distribution of the spin wave function. In Bruno's picture, this probability distribution is analogous to the density of a classical gas that is repelled from the Majorana stars. [Credits: (Left) APS/Carin Cain; (Right) Courtesy P. Bruno/ESRF]

The Italian physicist Ettore Majorana, who disappeared in 1938, is now widely recognized for inventing the notion of a fermionic particle, the Majorana fermion, which has the strange property of being its own antiparticle [1]. What is perhaps less well known is that he also developed a natural and exact representation of a quantum spin [2], which inspired Julian Schwinger to create the bosonic representation often used today in the theoretical study of quantum spin systems [3]. In Majorana's representation, a general spin state corresponds to a configuration of points on a sphere, a picture that makes a high-dimensional Hilbert space easier to comprehend. In Physical Review Letters, Patrick Bruno of the European Synchrotron Radiation Facility in Grenoble, France, has revived this representation by developing an intuitive and systematic method for calculating the physical properties of the spin state, such as its energy, and following the spin's evolution in time [4]. Bruno's intuitive approach has the potential to guide our understanding of quantum systems with multiple components, which are vast and complex, yet increasingly the focus of quantum engineering and quantum information.

For the elementary case of a spin 1/2, or any quantum two-level system, Felix Bloch established [5] that an arbitrary pure state can be represented by a point on a unit sphere. In this picture (Fig. 1, left), a spin-up state corresponds to the north pole and a spin-down state corresponds to the south pole. A superposition of these two states corresponds to a point on the sphere, defined by the unit vector n. The reason is geometrical: in addition to a nonessential normalization factor and an overall phase, a superposition state is specified by the relative amplitude and phase of its two components, and these two parameters can be mapped to the spherical coordinates θ and ϕ, which specify the direction of n. The point at n could equally well be viewed as an eigenstate with eigenvalue +1/2 for a spin oriented along n. For this simple spin-1/2 case, the Majorana star is defined as the lone point on the sphere in the opposite direction to n.

However, a spin-J state with J>1/2 does not, in general, correspond to an eigenstate of the spin vector in any direction. In addition to an overall phase, it takes 2J complex numbers to specify a state, and these numbers can't be represented by a single point on a sphere. Eigenstates with eigenvalue +J for the spin component in each direction, which are called spin coherent states, constitute only a tiny subset of all possible quantum states.
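Before turning to general J, the two-level mapping just described is easy to make concrete. The helper below is our illustration, not part of the Viewpoint: it strips the overall normalization and global phase from the state a|↑⟩ + b|↓⟩ and returns its Bloch angles.

```python
import numpy as np

def bloch_angles(a, b):
    """Bloch-sphere angles (theta, phi) of the state a|up> + b|down>.

    Normalization and the global phase are irrelevant, so only the
    relative amplitude and relative phase of a and b survive, exactly
    the two parameters that single out a point on the sphere.
    """
    a, b = complex(a), complex(b)
    norm = np.hypot(abs(a), abs(b))
    theta = 2 * np.arccos(abs(a) / norm)   # polar angle from the relative amplitude
    phi = (np.angle(b) - np.angle(a)) % (2 * np.pi)  # azimuth from the relative phase
    return theta, phi

print(bloch_angles(1, 0))   # spin up: theta = 0 (north pole)
print(bloch_angles(1, 1))   # equal superposition: theta = pi/2, on the equator
```

For J > 1/2, as the article explains next, two such angles no longer suffice to specify the state.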
Nevertheless, one can represent a general quantum state as a linear superposition of the spin-coherent states. The superposition coefficient, given by the scalar product of the general state with a coherent state, is then a complex wave function of n. Loosely speaking, this complex wave function represents the probability amplitude of finding the spin in that direction. Majorana's insight was that this wave function has 2J points, or "vortices", on the Bloch sphere where the wave function vanishes. Apart from an overall phase and normalization factor, the wave function is completely specified by the positions of these vortices. Specifically, the spin state has zero overlap with the spin coherent states along the directions of the vortices. In his new work, Bruno calls these vortices the Majorana stars. In this picture, the eigenstate with eigenvalue m for the spin component along direction n corresponds to J+m stars coinciding at -n, with the remaining J-m stars coinciding at n. Configurations other than these special "antipodal" distributions give all the other spin quantum states in the vast Hilbert space.

Bruno has found a systematic method of using diagrammatic rules to calculate the physical quantities, such as the energy, in terms of the Majorana star positions. The trick is to make a connection between the probability distribution of the spin wave function on the Bloch sphere and the density of a classical gas of independent particles (Fig. 1, right). Using this analogy, calculating the spin wave function is equivalent to calculating the distribution of the "gas" at thermal equilibrium, assuming it is repelled from the Majorana stars by a potential that varies logarithmically with distance. Moreover, Bruno has found that there is also an artificial magnetic field in the radial direction with an intensity given by the gas density, as if each gas particle carried a quantum unit of magnetic flux. Together with the spin energy, this artificial magnetic field endows the Majorana stars with a classical dynamics that mirrors exactly the evolution of the quantum spin as governed by the Schrödinger equation.

As Bruno shows, Majorana stars are similar to vortices in other systems, such as a two-dimensional superfluid or an electron gas in a quantum Hall state. These vortices all have a life of their own, and feel an artificial magnetic field proportional to the particle density. There is a simple explanation of this common phenomenon. When a particle moves around a vortex once, the quantum wave function accumulates a phase of 2π. In other words, the particle feels an Aharonov-Bohm-like flux at the vortex position. Since "moving around" is relative, one could look at this from the vortex's perspective and say that it feels the same flux located at the particle. This is indeed confirmed in the present case by a direct calculation of the geometric phase for cyclic motion of the vortices [6].

The dynamics of the Majorana stars differs from, but is closely related to, the canonical form of Hamiltonian dynamics taught in classical mechanics. A classical phase space is spanned by a set of generalized coordinates and their conjugate momenta, with velocities and forces simply given by the gradients of the Hamiltonian function. In the case of the quantum spin, the phase space is defined by the coordinates of the 2J Majorana stars on the Bloch sphere, but the dynamics is in general noncanonical.
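Extracting the star positions from a given spin state is a small numerical exercise: the 2J stars are the roots of a polynomial built from the state's amplitudes. The sketch below is our illustration, under one common sign and phase convention (conventions differ across the literature, and the function name and ordering are ours, not Bruno's).

```python
import numpy as np
from math import comb

def majorana_stars(c):
    """Star directions (theta, phi) of a spin-J state.

    c : sequence of 2J+1 complex amplitudes, ordered m = J, J-1, ..., -J.
    Under one common convention the stars are the roots w of
        p(w) = sum_k sqrt(C(2J, k)) * c[k] * w**k,
    mapped to the sphere via w = tan(theta/2) * exp(i * phi).
    """
    c = np.asarray(c, dtype=complex)
    twoJ = len(c) - 1
    coeffs = np.array([np.sqrt(comb(twoJ, k)) * c[k] for k in range(twoJ + 1)])
    poly = coeffs[::-1]  # np.roots expects the highest power first
    # Each vanishing leading coefficient lowers the degree: that "missing"
    # root lies at infinity, i.e. the star sits at the south pole.
    n_inf = 0
    while len(poly) > 1 and abs(poly[0]) < 1e-12:
        poly = poly[1:]
        n_inf += 1
    roots = np.roots(poly) if len(poly) > 1 else np.array([])
    stars = [(2 * np.arctan(abs(w)), np.angle(w)) for w in roots]
    stars += [(np.pi, 0.0)] * n_inf
    return stars

# |J=1, m=0>: one star at each pole, the "antipodal" pattern described above.
print(majorana_stars([0, 1, 0]))
```

As a consistency check, the coherent state [1, 0, 0] (spin up along z) sends both stars to the south pole, matching the spin-1/2 rule quoted earlier that the star sits opposite to n.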
Had one chosen an orthogonal basis, such as the 2J+1 eigenstates of the spin component along a fixed axis, the Schrödinger equation would imply a canonical dynamics, with the probabilities and phases playing the roles of canonical momenta and coordinates [7]. The canonical structure remains true even for the relative probabilities and phases after separating out the total probability (which is always normalized anyway) and a nonessential overall phase [8]. These canonical variables can be straightforwardly related to the Majorana stars, but the relation does not necessarily form a canonical transformation, rendering the dynamics of the latter noncanonical.

The Majorana representation and Bruno's new development may turn out to be very useful in systems such as molecular magnets [9] or multilevel qubits [10]. Much of our intuition in the past was derived from a semiclassical picture, with the stars clustered together and moving according to the Landau-Lifshitz equation. Now we can visualize the quantum space in full detail and with ease by letting the stars spread out and wander on the Bloch sphere.

The author would like to acknowledge support from DOE-DMSE (DE-FG03-02ER45958), NBRPC (2012CB-921300), NSFC (91121004), and the Welch Foundation (F-1255).

1. F. Wilczek, "Majorana Returns," Nature Phys. 5, 614 (2009).
2. E. Majorana, Nuovo Cimento 9, 43 (1932).
3. J. Schwinger, US Atomic Energy Commission, Report NYO-3071 (1952); later published in Quantum Theory of Angular Momentum, edited by L. C. Biedenharn and H. Van Dam (Academic Press, New York, 1965).
4. P. Bruno, "Quantum Geometric Phase in Majorana's Stellar Representation: Mapping onto a Many-Body Aharonov-Bohm Phase," Phys. Rev. Lett. 108, 240402 (2012).
5. F. Bloch and I. I. Rabi, "Atoms in Variable Magnetic Fields," Rev. Mod. Phys. 17, 237 (1945).
6. See also J. H. Hannay, "The Berry Phase for Spin in the Majorana Representation," J. Phys. A 31, L53 (1998).
7. A. Heslot, "Quantum Mechanics as a Classical Theory," Phys. Rev. D 31, 1341 (1985).
8. J. Liu, B. Wu, and Q. Niu, "Nonlinear Evolution of Quantum States in the Adiabatic Regime," Phys. Rev. Lett. 90, 170404 (2003).
9. P. C. E. Stamp, E. M. Chudnovsky, and B. Barbara, "Quantum Tunneling of Magnetization in Solids," Int. J. Mod. Phys. B 6, 1355 (1992).
10. A. R. Usha Devi, Sudha, and A. K. Rajagopal, "Majorana Representation of Symmetric Multiqubit States," arXiv:1103.3640.

About the Author: Qian Niu is a Trull Centennial Professor of Physics at The University of Texas at Austin, from which he is currently on leave to serve as the director of the International Center for Quantum Materials at Peking University. He has worked on the theories of quantum Hall effects, quasicrystals, ultracold atoms, spin transport, and graphene materials, with an emphasis on topological and geometric phase effects in quantum transport. He obtained a B.S. from Peking University and a Ph.D. from the University of Washington at Seattle, and did postdoctoral work at the University of Illinois at Urbana-Champaign and the University of California at Santa Barbara before joining the faculty of UT Austin in 1990.
Critique of a Metaphysics of Process
demarcating things as they are conventionally & ultimately moving along swimmingly

Part I : General Metaphysics

I pay homage to Je Tsongkhapa, inspired by the profound philosophies of Protector Nâgârjuna, Immanuel Kant & Alfred North Whitehead, with thanks to William of Ockham and to the living trees.

"Neither from itself nor from another, nor from both, nor without a cause, does anything whatever, anywhere arise." Nâgârjuna : Mûlamadhyamakakârikâ, 1.

"Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them : the starry heavens above me and the moral law within me." Kant, I. : Critique of Practical Reason, conclusion, on Kant's tombstone.

"'Creativity' is the universal of universals characterizing ultimate matter of fact. It is the ultimate principle by which the many, which are the universe disjunctively, become the one actual occasion, which is the universe conjunctively. It lies in the nature of things that the many enter into complex unity." Whitehead, A.N. : Process and Reality, § 31.

Natura abhorret a vacuo (Nature abhors a vacuum).

This work drew direct inspiration from Nâgârjuna's Fundamental Verses on the Middle Way, the Mûlamadhyamakakârikâ (2nd century CE), Immanuel Kant's Kritik der reinen Vernunft (1781), the Critique of Pure Reason, and Alfred North Whitehead's Process and Reality (1927/28). Knowing this, the reader is exempt, except for the odd quotation, from the burden of the usual battery of academic references. For the prolegomena to this metaphysics of process, consult Criticosynthesis.

"Be empty, that is all."

Part I : General Metaphysics.

Chapter 1 Introducing Metaphysics & Ontology.
1.1 Metaphysics & Science.
A. Object-Dependent, Imaginal & Perspectivistic Styles. § 1 The Issue of Style. § 2 Deriving Style from Objects. § 3 Imaginal Style. § 4 Creative Unfoldment. § 5 The Style of Process Metaphysics.
B. Opposition, Reduction & Discordant Truce. § 1 The Axiomatic Base. § 2 Monism, Dualism or Pluralism. § 3 Critical Epistemology. § 4 Conflictual Model. § 5 Reductionist Model. § 6 Metaphysics & Criticism. § 7 Discordant Truce. § 8 The Objectivity of Sensate Objects. § 9 The Subjectivity of Mental Objects. § 10 Direct & Indirect Experience.
C. Towards a Critical Metaphysics. § 1 Transcendence & Interdependence in Ancient Egyptian Sapience. § 2 Greek Metaphysics : Transcendence & Independence. § 3 Metaphysics in Monotheism & Modern Philosophy. § 4 The Fundamental Question : Being or Knowing ? § 5 Precritical Metaphysics : Being before Knowing. § 6 Critical Metaphysics : Knowing before Being.
D. Valid Science & Critical Metaphysics. § 1 Transcendental Logic of Cognition. § 2 The Correct Logic of Scientific Discovery. § 3 The Validity of Scientific Knowledge. § 4 Casus-Law : the Maxims of Knowledge Production. § 5 Metaphysical Background Information.
E. Thinking Metaphysical Advancement. § 1 The Mistake of Absolute Relativism. § 2 Logical Advance. § 3 Semantic Advance.
1.2 Immanent Metaphysics.
A. The Limit-Concepts of Reason. § 1 Finite Series and the Infinite. § 2 Modern Limit-concepts : Soul, World, God. § 3 The Copernican Revolution. § 4 The Linguistic Turn. § 5 Epistemological Limit-concepts : the Real & the Ideal. § 6 Metaphysical Limit-concepts : Conserver, Designer & Clear Light*.
B. Diversity & Convergence in the World. § 1 Horizontal : Variety, Display & the World-Ground. § 2 Vertical : Unity, Intelligent Focus & Clear Light*.
C. The Alliance between Science & Immanent Metaphysics. § 1 The Alliance of Form. § 2 The Alliance of Contents. § 3 Empirical Significance & Heuristic Relevance.
D. Limitations of a Possible Speculative Discourse. § 1 Logical Limitations.
§ 2 Semantic Limitations. § 3 Cognitive Limitations.
1.3 Transcendent Metaphysics.
A. Jumping Beyond Limit-Concepts. § 1 Epistemological Transgressions. § 2 Ontological Transgressions. § 3 Transgressive Metaphysics. § 4 Deconstruction & the Margin.
B. Conceptuality & Non-Conceptuality. § 1 Conceptual Thought. § 2 Ante-rational Regressions. § 3 Meta-rational Transgressions. § 4 Direct Experience & Cognitive Nonduality. § 5 The Epistemological Status of Nonduality.
C. Irrationality versus Poetic Sublimity. § 1 Features of Irrationality. § 2 Transcendence & Art.
1.4 Ontology.
A. Defining Ontology without the Nature of Being. § 1 Place of Ontology in Metaphysics. § 2 Objects of Ontology : What is There ? § 3 Monist, Dualist & Pluralist Ontologies. § 4 Failures of Materialist & Spiritualist Ontologies. § 5 Voidness, Emptiness & Interdependence.
B. Perennial Ontology ? § 1 The Ancient Egyptian Nun & the Pre-Socratic Ground. § 2 The Logic of Being & the Fact of Becoming. § 3 Greek & Indian Concept-Realism. § 4 The Tao. § 5 The Dharma Difference.
C. Against Foundation & Substance. § 1 The Definition of Substance. § 2 The Münchausen Trilemma. § 3 Avoiding Dogmatism & Scepticism.
D. Conventional Appearance. § 1 What is Truly There ? § 2 Concepts, Determinations & Conditions. § 3 Valid but Mistaken Appearance. § 4 Appearance, Illusion & the Universal Illusion.
E. Ultimate Suchness/Thatness. § 1 The Kataphatic View on the Ultimate. § 2 The Apophatic View on the Ultimate. § 3 The Non-Affirmative Negation. § 4 Fabricating the Ultimate : Ending Reified Concepts. § 5 The Direct Experience of the Unfabricated Ultimate.
F. The Ontological Scheme. § 1 Event & Actual Occasion. § 2 Efficient & Final Determinations of an Actual Occasion. § 3 The Three Operators. § 4 Aggregates of Actual Occasions. § 5 Individualized Societies. § 6 Panpsychism versus Panexperientialism. § 7 The God* of Process Ontology.

Chapter 2 Mental Pliancy & its Enemies.
2.1 Definition of Mind. § 1 Awareness, Attention & Cognizing. § 2 Attending Objects of the Mind. § 3 Cognizing Clarity. § 4 The Luminous Clear Ground of the Mind.
2.2 The Continuum of the Mindstream. § 1 A Non-Spatial Continuum : Temporal and Atemporal. § 2 Symmetry & Symmetry-Break. § 3 From Happiness to Peace.
2.3 The Non-Physical Domain of the Mind. § 1 Physical & Non-Physical Domains. § 2 Upward Causation. § 3 Downward Causation.
2.4 Ego, Self & Selflessness. § 1 Defining the Self. § 2 The Two Foci. § 3 Prehending the Selfless Mindstream.
2.5 Closed & Open Minds. § 1 The Logic of Self-Cherishing Affliction. § 2 Ontologizing the Self. § 3 The Closed Entropic Mind. § 4 The Mind of Enlightenment. § 5 The Open Negentropic Mind or Pliant Mind.

Chapter 3 Metaphysics as Conventional Truth.
3.1 Conventional Truth as Valid but Mistaken. § 1 The Validation of Knowledge. § 2 The Relevance of Authority. § 3 The Significance of Experimentation. § 4 The Worth of Conventional Truth. § 5 How Conventional Truth Fails. § 6 Substantial Instantiation in Conventional Truth.
3.2 The Argument of Illusion. § 1 The Argument from the Senses. § 2 The Argument from the Rational Mind. § 3 The Argument from Speculative Reason. § 4 The Argument from ...

Chapter 4 Speculative Thought.
4.1 Speculating on the Subject. § 1 The Identity System. § 2 Desubstantializing Identity. § 3 From Ego-Circularity to Bi-modality. § 4 Selflessness : Clearing the Ontic Self. § 5 The Immortal Nature of the Clear Light* Mind.
4.2 Speculating on the Object. § 1 The Object of Creative Thought.
§ 2 Process : Clearing the Ontic World-System.

Chapter 5 Preparing the Mind for Ultimate Truth.
5.1 Defining Ultimate Truth ? § 1 Primordial Ground : the Undifferentiated. § 2 Unbounded Wholeness : the Absolute. § 3 Things As They Are : the Non-Deceptive. § 4 The Duality of the Simultaneous.
5.2 Conceptual Fallacies and Nondual Un-saying. § 1 Against the Ontology of the One Truth. § 2 Against the Ontology of Awakening. § 3 The Case of the Unity of the World-Ground. § 4 The Positive Power of Silence.
5.3 Generating Right View. § 1 Identifying the Culprit. § 2 Eliminating Concepts with Concepts. § 3 Contrived Realization of Full-Emptiness. § 4 Uncontrived Uncovering of the Clear Light Nature of Mind*.

Chapter 6 The Logic of Ultimate Analysis.
6.1 Conventional & Ultimate Analysis. § 1 Conventional Analysis. § 2 Ultimate Analysis. § 3 The Dangers of Ultimate Analysis.
6.2 The Formal Presuppositions of Ultimate Analysis. § 1 The Rules of Formal Logic. § 2 Identity. § 3 Duality & Negation. § 4 Excluded Third.
6.3 The Primitives. § 1 The Logical Operators. § 2 The Quantifiers. § 3 Objects. § 4 Differentiating Objects. § 5 The Apprehending Self.
6.4 The Six Instantiations. § 1 Instantiation. § 2 Logical Instantiation. § 3 Functional Instantiation. § 4 Conventional Instantiation. § 5 Substantial Instantiation. § 6 Ultimate (or Absolute) Instantiation. § 7 Mere Existential Instantiation.
6.5 The Logic of the Selflessness of Persons. § 1 Establishing Ontic Identity. § 2 Ontic Identity is not Identical with Mind or Body. § 3 Ontic Identity is not Different from Mind or Body. § 4 No Ontic Identity is Found.
6.6 The Logic of the Selflessness of Phenomena. § 1 Establishing Ontic Sensate Objects. § 2 Ontic Sensate Objects are not Identical with their Parts. § 3 Ontic Sensate Objects are not Different from their Parts. § 4 No Ontic Sensate Objects are Found.
6.7 Conclusions. § 1 Main Problems of Substantiality. § 2 Non-Substantiality. § 3 Dependent Arising & Process. § 4 One Object with Two Epistemic Isolates. § 5 Simultaneity : No Two Worlds & No Two States.
6.8 Full-Emptiness. § 1 Fullness of Earth : Process Nature of Objects & Subjects. § 2 Emptiness of Heaven : Absence of Inherent Existence. § 3 Pansacralism.

Chapter 7 Preparative Ontology.
7.1 The Question of Questions : Why Something ? § 1 Nothingness : Relative & Absolute. § 2 Nothingness : Passive & Active. § 3 Nihilism of the Void. § 4 Active Nothingness : Potentiality & Virtuality.
7.2 Operating Something.
A. Matter : Particles, Fields & Forces or Hardware. § 1 The Quantum Plasma of the World-Ground. § 2 The Beginning of the Conventional Spacetime Continuum. § 3 Elementary Particles, Fields & Forces.
B. Information : Encoded Data or Software. § 4 Information : Informing & Informed. § 5 Informed Information. § 6 The Matter - Information Bond. § 7 Life as Complexification.
C. Consciousness : Meaning & Intent or Userware. § 8 Meaning & Intent. § 9 Evolutionary Panexperientialism & Degrees of Consciousness. § 10 The Spiritual Features of Consciousness.
7.3 Towards a Metaphysics of Specifics.

Part II : Metaphysics of Specifics.
Chapter 8 Metaphysical Cosmology.
Chapter 9 Metaphysical Cybernetics.
Chapter 10 Metaphysical Biology.
Chapter 11 Metaphysical Anthropology.
Chapter 12 Metaphysical Mysticism.
Chapter 13 Metaphysical Theology.
Thematic Glossary
Alphabetic Glossary

Ontology, the study of what is shared in common by all existing things (individual phenomena or aggregates of phenomena), is the capstone of the love of wisdom.
Ontology is also the final speculative goal of metaphysical inquiry, both immanent (within the world) and transcendent (beyond the world). Despite all possible variety between things (including conscious persons endowed with a human mind), ontology tries to lay bare the ultimate nature of all phenomena. In vain, no doubt. But in the process of this conceptual understanding, coarse, subtle & very subtle arguments are put in place. As history unfolds, "this" metaphysics of existence or process will inevitably be replaced by "that" better one. In the dialogue between these versions, complex new scientifically inspiring concepts may emerge. This inexhaustible complexification is one of the hallmarks of the history of valid ontologies.

To further the speculative branch of philosophy or "metaphysics", the normative disciplines of logic, epistemology, ethics & aesthetics have to influence the mind first (cf. Criticosynthesis, 2008). One has to know the principles of correct reasoning (transcendental logic), the norms of valid knowledge (theory of knowledge), the maxims of knowledge-production (practice of knowledge), the judgments pertaining to the good (the just, fair & right), providing maxims for what must be done (ethics), and the judgments pertaining to what we hope others may imitate, namely the sublime beauty of excellent & exemplary states of matter (aesthetics). These normative disciplines foster precise goals. Logic targets correctness, epistemology validity, ethics goodness and aesthetics unity & harmony. If left out, any metaphysical enterprise will be insufficiently capacitated. Then, to conceptualize the ultimate nature of phenomena, speculative depth & extent will be lacking.

When Andronikos of Rhodos (first century BCE) classified the works of Aristotle, he placed the books on First Philosophy next to fourteen treatises on Nature ("ta physika"). These were called "ta meta ta physika" or "the (books) coming after the (books on) nature", and so "metaphysics" was born. The names given to Aristotle's First Philosophy vary from "theology", "wisdom" (Aristotle), "transphysics" (Albertus Magnus), "hyperphysics" (Simplicius) to "paraphysics" ... Playing on the ambiguity in "meta", it was also taken to connote what is beyond sensible nature. For Aristotle, metaphysics was (a) the science of first principles and causes, (b) the science of being as being and (c) theology. Did Andronikos leave us a hint ? Should metaphysics, before starting to speculate, always first study physics, i.e. "science" ? Without the backbone of valid empirico-formal knowledge, can the totalizing conceptualization sought be anything other than incomplete and/or flawed ? Or worse : irrational nonsense ?

§ 1 Correctness and Validity.

Logic and epistemology teach how formal & empirico-formal knowledge and its advancement are possible. They focus on conventional truth, the functional reality of sensate & mental objects shared with other knowers. Logic rules the architecture of conceptual reasoning. Classical logic identifies truth-values, fallacies, consistency, coherence & completeness. It does so using the principles of identity, non-contradiction and excluded third. It invites us not to multiply entities needlessly (parsimony), and mostly builds on symmetry. Non-classical logics develop systems of inference based on alternative principles, needed to understand special objects like action, possibility or quantum phenomena. They teach us to work with paradox, absence of coherence or contradiction (para-consistency).
Applying formal logic to the question of the ultimate nature of phenomena, or ultimate analysis (cf. Ultimate Analysis, 2009), either results in the conceptualization of the absence of substantial reality of oneself (the selflessness of persons), or in realizing the lack of such in phenomena (the selflessness of phenomena). Reifying the generic idea of emptiness ("shûnyatâ", cf. Emptiness Panacea, 2008) leads to nihilism, affirming self and non-self to be unsubstantial and so nothing at all, not even functional. Nihilism may however disguise itself as essentialism, for nothingness itself, as an underlying void thing (hypokeimenon), is at times -paradoxically- turned into the nonexistent "stuff" out of which phenomena emerge. Rejecting ultimate analysis for no good reason leads to eternalism, affirming the substantial existence of self and/or non-self. Here the many contradictions of substantialism are waved away. Clearly a mind analyzing reality by way of logic alone is not equipped to realize the wisdom unveiling ultimate truth. Nihilism and eternalism are weak positions. A mind thinking along those lines is not pliant, but either self-cherishing or self-annihilating. Both tendencies point to incorrect ontological presuppositions. Self-grasping has not come to an end. If any metaphysical insight is to be gained, both mentalities must be abandoned.

Defining valid knowledge, epistemology demarcates the rules of true knowledge in terms of valid empirico-formal statements of fact. Indeed, science is validated by experimentation & argumentation, metaphysics by the latter only (cf. Criticosynthesis, 2007, chapter 2). Rejecting substantialism, metaphysical speculation on process takes full advantage of the logic of ultimate analysis. Metaphysics of process is not a mummification of ideas, the denial of diversity and impermanence (of life itself) for the sake of a fictional stability, a "Jenseits" (a beyond) of imagination or a Platonic world. Nor is it the reification of the objective & subjective conditions of all possible thought. Metaphysics of process accepts the results of logic & science : absolutely isolated objects cannot be found. Metaphysics is not a speculation on substance but on process. The latter encompasses both absence & presence : the arising, the abiding and the ceasing. It does so because only interdependent, impermanent phenomena arise, abide & cease. These define a stream of functionally interrelated happenings (efficient) & moments of creative advance (finative). Ergo, metaphysics is equated neither with idealism or Platonism, nor with realism or Aristotelianism.

§ 2 The Pliancy of Mind.

Insofar as our speculative pursuit does not consider the link between, on the one hand, the existential conditions defining the egological state of the mind of Homo normalis and, on the other hand, the capacity to cognize the ultimate nature of things, ontology is nothing more than a subtle ornament of dry metaphysical intellectualism. Moreover, like someone describing how to swim without ever having touched water, these intellectual activities miss their target. The conclusions reached may be accepted or rejected without ceasing the existential dissatisfaction, both emotional & intellectual, present in those in whom these ideas and their speculative study happen. This considerably handicaps philosophy's capacity to serve practical goals ! How to outline a philosophy of the practice of philosophy ?
Even if the necessity of the arguments cannot be obscured or confused, their influence on sensation, thought, feelings, action and consciousness is insufficient to actually liberate the mind from mental obscurations & afflictive emotions by unconcealing ultimate (absolute) truth, i.e. by the direct, non-conceptual & nondual experience of the ultimate nature of phenomena. Without considering the maieutic dimension assisting the liberation of human beings, without engaged thinking, speculative philosophy does not really take off. Then barren academia is all that is left. The Socratic intent opposes this exclusive hold of philologistics on the pursuit of wisdom. Wisdom encompasses theory & practice. Philosophy is both abstract & concrete. Both form a unity. An integral part of society, the practice of philosophy is an integral part of the philosophical life, involving theory & practice. To self-realize the spirit of wisdom, the philosophical life calls for spirituality, or the art & science of addressing consciousness, thought, affect, volition and sensation. The necessity of such a "practice of philosophy" derives from wisdom's aim to reduce alienation & disorientation, promoting :

1. (inter)subjectivity : self-awareness, consciousness of being a subject, a someone rather than a something, the First Person perspective, the ability to interact constructively with others, implying openness, flexibility, respect, tolerance, etc. ;
2. cognitive autonomy : the capacity to think rationally, to self-reflect, to formulate ideas independent of traditions, to integrate instinct & intuition in a rational way, dialogical capacity, using arguments to posit opinions ;
3. balance : awareness of the importance of happiness, justice and fairness in thought, feelings and actions, communicational action, building peace, mutual understanding & acting against extremes like fundamentalism, nihilism, virulent scepticism, closed dogmatism, exaggerated relativism, blind materialism, naive spiritualism, etc. ;
4. intellectual & spiritual concentration, sharpness & depth : creative capacity, originality, inventiveness, novelty, and the spiritual exercises aiming at wholeness, leading to increased mental concentration, intellectual acuteness and extent of interests and compass.

The abandonment of the practice of sapience by the academy is a recent development. Let it be rejected. In the light of criticism (cf. Criticosynthesis, 2007), academic philosophy is both theoretical & practical :

• theoria : the philosophy of the theory of philosophy : (1) normative (judicial) : logic, epistemology, ethics & aesthetics ; (2) descriptive (speculative) : metaphysics incorporating an ontology of process, cosmos, life & the human ; (3) philologistics : history of philosophy, hermeneutics, linguistics, philosophy of language, neurophilosophy, etc.
• the praxis of wisdom : the philosophy of the practice of philosophy : namely the tools to apply philosophy in society, in terms of psychology, sociology, politics, economy, advising, counselling, self-realization, etc.

The "theoretical" activity of the philosopher (reading, writing, teaching) needs to be complemented by the "practical" activity of the same philosopher (listening, advising, mediating, meditating). Without sufficient input from real-life & real-time philosophical crisis-management, the mighty stream of wisdom becomes a serpentine of triviality and/or a pestilence of details (pointless subtlety). This is in-crowd philosophy, elitist and mostly useless.
Working together, contemplation (theory) and action (practice) allow wisdom to deepen by the touch of a wide spectrum of different types of interactions. Risks are taken. Opposition & creativity (novelty) must be given their "random" place in the institutional architecture. One must teach philosophers how to integrate themselves in the economic cycle. Kept outside the latter, state-funded philosophy rises. This situation does not benefit philosophy, quite the contrary. Moreover, it also limits the possibility of entering wisdom, the mind witnessing the ultimate nature of all possible phenomena. In doing so, the absence of a practice of philosophy hinders the development of philosophical thought, both in terms of its depth & extent. Indeed, when human beings in general, and philosophers in particular, only care for their own petty little kingdoms of trust and act accordingly, their minds miss the necessary pliancy to grasp, assimilate & integrate the truth concerning the nature of phenomena. The ability to flex without breaking comes from being able to adapt to different conditions. This capacity goes hand in hand with a calm mind cherishing others more than oneself. By eliminating sapiental activities, the stuck, strained mind -accommodating itself first- loses the capacity to swim even if it wishes to do so. And so when these minds do enter the water, their views immediately drown.

Only through love & compassion, the wish & activity of causing all possible others to be happy, does the mind slowly open up. Only with this pliant & calm mind may one try to take in the wisdom realizing the ultimate nature of things. Conventional truth, in particular functional interdependence, the bedrock of method & compassion, must be grasped before the wisdom witnessing phenomena as they are may be discovered. One cannot philosophize with a mind stuck in the mud of self-cherishing & self-grasping. Doing so leads to nothing, except a waste of precious time & good effort. It furthers no merit, reward or solution. Ethics is thus a necessary prerequisite for the ultimate success of metaphysics in general and ontology in particular. It is an integral ingredient making the mind capable of embarking with conventional truths, bringing them to the other shore of ultimate truth. Without compassion, wisdom cannot be found. Without wisdom, compassion is inefficient, i.e. does not liberate from suffering. Reason without ethics is crippled, like seeing with one eye. Such reasonings are like poison in a pot, prompting the smart to put nothing in it ... Of course, without compassion, ultimate truth can be approached with the same ultimate analysis, but the resultant view on ultimate nature, lacking the functionality of conventional reality, will be nihilist. Then ultimate nature becomes a "noumenon", a limit-concept, not a nondual discovery of the natural light of the mind. Emptiness is reduced to a void viewed as an absolute nothingness, a mere formal condition. To miss this important methodological role of ethics in ontology, so stressed in the East, particularly in the Buddhadharma, is to neglect the actual practice of philosophy to the advantage of a crippled theoretical definition of "wisdom" as "a theory on the totality of being". This mere academism is sterile, even in its subtlety.
It does not lead to liberation, while ultimate truth sets us free from the obscurations caused by the "Three Poisons" of ignorance (not knowing ultimate nature), desire (grasping & clinging to sensate and/or mental objects) & hatred (rejecting & disliking this or that sensate and/or mental object).

§ 3 Unity & Harmony of Mind.

The mind is able to bring the manifold under unity. It does so by integrating separate units and by realizing a creative unison, an upgrading synthesis. This "Gestalt" is more than the mere sum of its components. Complex aggregates ensue. And these are not disordered or amorphous. On the contrary, architectures and meaningful patterns are everywhere apparent in Nature. Even electrons are ruled by Pauli's exclusion principle, by which no two electrons can be in the same state or configuration at the same time, accounting for the observed patterns of light emission from atoms. The organization or code of these architectures is called "information". Just as noise is not sound, well-formed information has little redundancy. A compression of structure is aimed at ; an elegance, a symmetry, a play of interdependence and interrelationality, highlighting the togetherness of all phenomena of Nature. These conditions are not part of logic per se, but pertain to aesthetics, the judgment of beauty (cf. Criticosynthesis, 2007, chapter 5).

The metaphysical mind needs more than correctness, validity & pliancy. A totalizing, all-encompassing intent must be addressed. Tí tò ón ? or What is being ? already refers to this over-arching zeal of metaphysics. While for Aristotle this "being" was "substance", process metaphysics posits actual occasions to be the final building-blocks of that which is, i.e. the set of all possible phenomena. The totality of possibilities is thus aimed at. These are necessarily organized, for, to be arguable, metaphysics needs to be well-formed. Here forms of harmonization enter the picture, for information is an architecture, i.e. a structure, form or mathematical representation of process. Harmony is a relatively continuous balance between phenomena, whereas forms of harmony are archetypal ways of balancing out. Balance can be weird, awkward, odd, strange, bizarre, absurd, grotesque, bombastic, exaggerated, etc. This evokes the pair symmetry and symmetry-break. Absence of balance is not a form of harmony, but a disharmonization. In a mind able to speculate well, unity & harmony interlock. This final element capacitates the mind sufficiently to entertain metaphysics. Accepting correct reasoning and valid scientific knowledge, training mental pliancy and fostering what brings unity & harmony, the mind is open, deep, sharp, acute & clear enough to be at peace and speculate.
Epistemology : knowledge - truth. Without an object nothing is thought (necessity of reality : idea of the REAL) ; without a subject nobody thinks (necessity of ideality : idea of the IDEAL) ; Factum Rationis. Theoretical : object of thought, subject of thought, the research-cell. Practical : opportunistic logic, the production of provisional, probable & coherent empirico-formal, scientific knowledge we can hold for true.

Ethics : volition - the good. Object : coordinated movement & its consequence. Transcendental : free will, duty - calling. Theoretical : intent - conscience, family - property - the secular state. Practical : persons - health - death.

Esthetics : feeling - the beautiful. Object : states of sensate matter or mental objects. Transcendental : consciousness pursuing excellence & exemplarity. Theoretical : sensate & evocative aesthetic features, the aesthetic attitude ; objective art, social art, revolutionary art, magisterial art. Practical : subjective art, personal art, psycho-dynamic art, total art ; judgments pertaining to what we hope others may imitate, namely the beauty of excellent & exemplary states of matter.

§ 4 Ultimate, Non-Relative Truth.

On the one hand, ontology, in absolute terms, aims to know the ultimate nature of phenomena. Thus it reveals an ultimate truth. But, as we shall see, transcendent metaphysics is nondual, ineffable & apophatic (without tales). It merely points (as does poetry) to something it cannot denote, designate or conceptualize. This experience cannot be explained in positive terms, for the infinite cannot be contained by the finite. Easily broken by absolute truth, words are unworthy vessels. Conceptualizing it, we are left with nothing else but a non-affirmative negation. Needing a conceptualized framework, only immanent metaphysics is left. But the quest of its periphery does not unveil a transcendent Creator fashioning Nature "ex nihilo", but an intelligent "pneuma" or "Anima Mundi", an Architect limited by the creative freedom at work in Nature. To cognize this ultimate mode of existence, i.e. the natural, spontaneous, uncontrived, unfabricated abiding of phenomena, is to know their ultimate truth. So ultimate truth is not an "entity" above or behind objects, as in Platonism, but merely their natural condition, i.e. their suchness/thatness, or what they are in and by themselves. Although open to all conscious beings, this absolute state of each and every object is -unfortunately- realized by only a few. The reason is simple : to eliminate the countless delusions obscuring the mind is very difficult, demanding the ongoing discipline of study, reflection & meditation. The latter asks for renunciation, compassion and the wisdom-mind realizing the true nature of phenomena. Hence, transcendent metaphysics is not impossible sui generis ; it is blocked only by ignorance (emotional & mental obscurations).

On the other hand, ontology does not turn its back on the conventional truth of the nominal, "common sense" hallucination of designated & named appearances. Quite the contrary. The ultimate exists conventionally. There are no "ultimate objects" next to, behind or beyond conventional objects, but each and every conventional object has a veiled, obscured, concealed absolute nature which is its ultimate truth. Unbridled by criticism, these misrepresentations of conventionality lead to mistaken, confused agreements, opinions, notions, ideas and/or theories relating how things exist as "real", "extra-mental" substances "out there" (as in realism), and/or as "ideal", "intra-mental" selves "in here" (as in idealism).
But this does not invalidate them as conventional, functional objects. They are valid but mistaken. As the object of science, conventional truth designates the factual nature of relative, fallible empirico-formal statements arrived at through experiment & argument. In an immanent metaphysics, conventional truth, on the basis of such statements of fact, speculates about being as such, the cosmos, life and consciousness. Being non-factual, it only argues (it cannot test). Its arguments are more than mere perspectives, but slowly realize greater and greater clarity and comprehensiveness, finally moving to the periphery of its field. But these same conventional objects, valid insofar as their functions are concerned, are mistaken because they conceal their true nature. Indeed, the absence of their own-power is not eliminated by conventional analysis, quite the contrary. Physical objects are defined as isolated & separate. A pivotal mental object like the self is reified and so deemed substantial ! To cognize designated facts conceptually is to know conventional or relative truth. Although available through reason, it too -as valid science- is a rare occasion. Conventional falsehoods are far more common and easier to adhere to. Science aims at valid but mistaken empirico-formal truth. Immanent metaphysics tries to acquire valid but mistaken conventional speculative truth. Transcendent metaphysics points to ultimate truth, beyond validation and unmistaken.

§ 5 Conventional, Relative Truth.

Either entities are posited in a conventional act of cognition, or they are revealed by the wisdom realizing the ultimate status of phenomena, implying an uncommon, implicit, hidden dimension of the mind, one able to discover and perceive ultimate nature directly. This unveils the absolute, the ultimate, i.e. things as they are. This is their suchness or thatness. Because conventionally human beings only cognize by way of conceptual mentation and/or sensation, the conditions determining mental & sensate objects co-determine what we identify as a conventional entity. We thus prelimit objects in terms of the physical laws of perception, the psychophysical phenomenon of sensation & the known cognitive mechanisms of positing mental objects. Conventional truth must accept the theory-ladenness of our observations, for a lot of objectivity does not eliminate subjectivity. In fact, the latter cannot be taken away. As long as object and/or subject are not hypostatized, duality by itself poses no problem. But conventional truth does reify both object and subject of cognition. Reified duality is always problematic. Conventional, conceptual thought and its relative truth split every act of cognition up into two independent & separate sides, juxtaposing a subject, defined as an object-possessor, and an object, posited or designated by this endowed cogito. However, both are mutually dependent and inclusive. Without a subject, there is no object to possess. Without an object, there is no positing, grasping, designating cogito. Moreover, all subjects are also the object of another subject. In such a discursive, concept-based cognition, objects, phenomena, events or knowable entities are either sensate or mental. Sensate objects are the product of perception and cognitive interpretation. Thoughts, feelings, volitions and consciousness are mental. The difference becomes very clear when considering dreams. Although the eye-sense is dormant, visual images do appear.
These are purely mental and are not caused by changes in the sensitive surface of the retina. Relative, conventional truth, or valid knowledge about how things appear (not how they are in and by themselves), is the concern of science. The latter involves the "craft of magical conjurations", manipulating determinations, conditions, functions & interdependent (re)organizations. Although science may be sophisticated, we cannot, with the standards of the conceptual mind, discover the ultimate nature of things, but only their appearance. By designating, conceptual thought fixates objects. In doing so, it allows objects to appear as existing from their own side, as substances existing according to their own characteristics. Even insofar as theoretical epistemology identifies this ontological illusion and eradicates its confusing influence on the foundations of epistemology itself (refusing to ground the possibility of knowledge in either object or subject), epistemology endorses the methodological need of applied epistemology to take objects and subjects at face value, i.e. as if existing from their own side, independent of each other, without referent, as common sense dictates. This reifying characteristic of conceptual thought & science tries -in vain- to transform interdependent & impermanent phenomena into fixed, permanent, independent & substantial things. Although criticism must conceive facts as theory-independent (if not, for lack of an object, knowledge itself would be impossible), conceptually we can never be sure whether this is actually the case or not. Only the non-conceptual, nondual wisdom-mind is able to definitively discern or apprehend ultimate truth, the suchness and thatness of all phenomena. Conceptual thought implies categorial designation, and this goes for both sensate & mental objects. Hence, it cannot be conceptually known whether conventional objects, existing in a conventional, functional way, on top of this also exist according to their own essence, nature, existentials or substantial characteristics. They are designated dependent on their parts, for they are all compounds. Theoretical epistemology must accept that facts also represent reality-as-such, but is not equipped to take a look "behind the surface of the mirror" and then conceptualize how things are there. Concepts are not able to pierce the membrane or lift the veil. Concepts are concealers. Therefore, although objects exist in a conventional way and thus make things work, both realist & idealist metaphysics -claiming sensate objects represent reality-as-such and/or mental objects represent the true order of things as they are- are conventional falsehoods, and this despite their playing a considerable role in applied epistemology (cf. methodological idealism versus methodological realism), as well as in the commonsense, nominal view of valid science (not to speak of invalid conventional knowledge). Confused because of its concordia discors, conceptual reason (in the pre-rational, proto-rational, formal, critical & creative modes of cognition) eclipses ultimate truth and designates objects to appear as this-or-that. Producing consensual illusions, science is not equipped to unveil reality-as-such. On the level of sensate objects, conceptual interpretation is never put to rest, while mental objects are merely (inter)subjective, and thus dependent on context. Moreover, reifying duality is never relinquished.
To end this confusion, the ante-rational antecedents as well as the mechanisms of conceptual cognition must be understood, eradicating ontological illusion. This is the work of critical thought. It yields the relative truth of duality, as between sensate & mental objects, between experiments (testing) & discussions (argumentation), between the theory-independent & the theory-laden side of facts, between correspondence & consensus as aspects of conventional truth, etc. In creative thought, i.e. in the mode of conceptual cognition used in immanent metaphysics, the gradual process of ultimate analysis, resulting in an approximate ultimate -the identity between interdependence and absence of substance- causes the ontological, substantializing, reifying strongholds of the duality of mind to finally collapse, opening it up to the discovery of the nondual, immediate, actual wisdom-mind apprehending ultimate nature. This wisdom is not produced, created or caused, but always given as the fundamental (naked) potential of the mind. Although ultimate analysis does not necessarily produce or cause wisdom-mind, it works as a valid and potent preparation, as a gateway to ultimate truth, an approximate, contrived (fabricated) ultimate. This is the ultimate purification of the conceptual mind. Being introduced to wisdom-mind is however immediate, and thus non-gradual, uncontrived and direct. So, as is often overlooked, on the side of the subject of experience, the via negativa yields a positive result : the possibility of a nondual dimension of mind beyond reason (formal & critical) & intellect (creative). On the side of the object, this puts down a clear message : the ultimate nature of phenomena lies beyond the conceptual and can therefore not be grasped in any of the conceptual modes of thought (pre-rational, proto-rational, formal, critical & creative). One needs to move ahead ! This raises the question : What ultimate truth does wisdom-mind know ?

§ 6 Ultimate Analysis.

In absolute terms, ontology claims to establish the ultimate truth about every existing thing, which is the same as directly cognizing the ultimate state of phenomena. This ultimate truth, the wisdom realizing what truly is, takes as its object things as they are, not as they appear. As Kant and neo-Kantianism have demonstrated, reason & science cannot penetrate further than appearing phenomena. Hence, from their side, ultimate truth is a "noumenon". So although conceptual thought is not equipped to penetrate reality-as-such, it is nevertheless possible to gradually loosen its grip on cognition and prepare the ultimate experience of the suchness of all things, including the mind. This is not an introduction, but a springboard establishing an approximate ultimate. It is a purification of the mind. Dissolving the hard core of conventionality and facilitating the non-gradual "jump" to the other shore of wisdom, certain conceptualizations end the reifying procedures (instantiations) of discursive thought. Thanks to this, the direct perception of the luminous core of the mind, the ultimate, always present nature of mind and of phenomena, may arise. This ultimate analysis (cf. Ultimate Logic, 2009), the gateway to ultimate truth, is a cognitive protocol aiming to arrest the reification of the conceptual mind by means of concepts and, with the greatest subtlety, to prepare nonduality, or the absence of concepts.
It accommodates the direct experience of the ultimate nature of phenomena, of things as they are, by way of a totalizing generic idea of the ultimate nature of phenomena. Regard this as an ultimate logic using concepts to clear away the reifying ground, preparing the realization of the unsubstantial, process-based nature of phenomena, i.e. their lack of intrinsic "thingness" or substance ("shûnyatâ") manifesting as their interdependence or dependent-arising ("pratîtya-samutpâda"). This unity of emptiness and dependent-arising is defined as "full-emptiness", a term encompassing all possible phenomena. In this ultimate logic, concepts pertaining to the fundamental structures of conceptual thought are manipulated to end reifying conceptualization, collapsing the conceptual mind under the weight of its reifications, demolishing substantializing theories & mental constructions. As certain conceptualizations stop the confused mind (as it were purifying it), leading to (not causing) the direct experience of the ultimate, it is hence not the case that conceptuality always engenders illusion. Otherwise, science and rationality would play no vital role in the cognitive emancipation of human beings, while they do. Ultimate analysis stops the substantial instantiation, and so makes the conceptual mind run exclusively on the existential instantiation. In such a mind, sensate & mental objects do arise, but without any further conceptual elaboration. They arise, abide and cease without any further ado.

§ 7 Immanent & Transcendent Metaphysics.

Ontology operates a "double coding" :
(a) Ultimate truth or unmistaken absolute knowledge, the object of transcendent metaphysics, unveils the ultimate nature of phenomena. Directly perceived by an absolute, nondual, ineffable cognition (called "prehension"), it reveals wisdom at its highest possible level, the level of suchness/thatness.
(b) Relative truth or valid but mistaken conventional knowledge, the object of science & immanent metaphysics, deals with the conventional reality of things, grasped in empirico-formal statements of fact (called "apprehensions") considered by all concerned sign-interpreters to be true, even if this only appears to be the case.

Invalid conventional knowledge or common falsehood, while quite rampant, is not considered here. The obstinate determination, tenacity or degree of abidance characterizing the dreamlike mirage of appearances backs conventional truth. The latter manifests in science as facts we can hold for true and in immanent metaphysics as valid speculations about the totality of what convention considers to exist. The major immanent leaps to consider here are existence itself, the cosmos, life & consciousness, i.e. answers to the questions : Why something rather than nothing ? Why cosmos ? Why life ? Why sentience ? Besides seeking ultimate truth or the ultimate status of phenomena, preparing the transcendence of conceptual thought by ending reification, thus revealing the potential suchness of the mind, immanent metaphysics, when invalid, signals our ability to cover up our inborn cognitive limitations with brontosauric theories on substance. Reifying, substantializing and so turning ideas into ultimate things or self-sufficient grounds, such transcendent ontologies forget the limitations of conceptual cognition and invalidate their position by not taking reason and science as their guide.
In doing so, they do not even accommodate important relative truths, like the influence of ontological illusion on knowledge in epistemology. The extremes of reification designate an absolute object (as in theism) and/or an absolute subject (a metaphysics of an "immortal soul", as in Vedânta). This grand story on the substance of the soul (the "âtman") accommodates a return to a static concept of the Divine, contradicting ultimate analysis. Moreover, such immanent metaphysics are often ill-informed about the objects of science. For example, they mostly do not integrate the special features of very large (relativity) and very small objects (quantum). Nor have they grasped the importance of non-linearity (chaos). In practice, illusion (things appearing differently than they are) works. Circumstances, people, things, sensations, thoughts, feelings, volitions and conscious meaning appear solid, unchanging and graspable as "realities" which either "exist out there" or as "idealities" designated as part of the mind "in here". But under ultimate analysis, their material, informational and sentient (conscious) operators are compounds or aggregates (of actual occasions) changing constantly. Nowhere can a stable, unified continuum be identified. Appearances seem independent existences, but under ultimate analysis such independence can nowhere be found. What seems a substance is always a process ... So, could we be tempted to claim that the "substance" of reality, its ultimate truth, is lack of substantiality ? Describing the ultimate nature of phenomena as unsubstantial is attributing a positive, conceptual content to the ultimate, characterizing the nondual as without anything, suggestive of a void or absolute nothingness. This leads to nihilism. To the extent we say phenomena are unsubstantial, our scientific & immanent metaphysical knowledge is relative. From the point of view of ultimate truth, there are no phenomena to be called "unsubstantial". Nothing can be said about the ultimate nature of phenomena. Nevertheless, both the direct, nondual cognition & the experience of full-emptiness, the simultaneity of absence of substance and presence of interdependence, i.e. the suchness/thatness of all things, are indeed possible. Conventional appearances do not reveal the ultimate nature of phenomena. They conjure a dreamlike, echolike world of functional interdependences. Upon these, the deluded mind projects (imputes, posits, attributes) the limit-concepts of reality and/or ideality, turning facts into real things (or physical objects) and thoughts into real ideals (attended by a substantial self). These substantial things only seem stable, for ultimate analysis shows they are not. For example, geological formations seem solid, continuous, lasting & permanent, but they are not. What then to think of the so-called lasting qualities of direct sensate & mental objects in general and our sense of selfhood in particular ? All are compounds and so impermanent. Insofar as conventional truth is concerned, the tenacity of functional interdependence -expressed as the regularity of Nature- is valid. Its degree of abidance is obvious. Appearances exist functionally and conventional existence is a fact. Things exist conventionally, there is something rather than nothing. Objects exist as imputed by the mind but -in case no minds are present- exist as resulting from fleeting determinations & conditions. There is not a single atom in existence determining its own ground ! All phenomena are other-powered.
Nihilism is refuted by accepting that there is a "base of designation" which, existing interdependently in Nature, is extra-mental. In epistemology, this acceptance is a norm necessary to be able to think the possibility of knowledge, but is not something "found", otherwise ontological realism would ground knowledge, leading to scandalous contradictions. Staying within the boundaries of conceptual thought, i.e. the pre-rational, proto-rational, formal, critical and creative modes of cognition, valid immanent metaphysics mostly serves relative, conventional truth. From epistemology, it receives the limit-concepts & conditions necessary to be able to conceptualize the two sides of its concordia discors, namely the parts played by object & subject. From science, it gets the parameters to speculate about the reality of existence as a whole, about the cosmos, the emergence of life and the miracle of consciousness. Hence, metaphysics has two faces. One is turned to conceptual thought and works out an immanent perspective on what is, the other to the ultimate suchness of all things, approaching this by way of nondual, non-conceptual cognitive apprehensions. Confusing this distinction and addressing the ultimate by way of concepts is the path of falsehood in transcendent metaphysics, while the path of truth regarding suchness/thatness is the wisdom-mind directly realizing the full-emptiness of all phenomena, i.e. the union of a universal lack of substance and the all-comprehensive interdependence between all things.

§ 8 Objective & Subjective Immanent Metaphysics.

Objectively, as a heuristic, or a general, common sense formulation guiding investigations, valid immanent metaphysics inspires science. It does so by offering a "grand story" about the world and expounds a thematic itinerary of sorts. Answering the question : "Why something rather than nothing ?", two extremes are avoided : being is not posited as eternal, continuous, autarchic, unchanging, substantial or essential, i.e. as non-referential. This is the (Platonic) fallacy of eternalism. Neither is the possibility of ultimate truth denied and fundamental "Dasein", or nature of mind, reduced to mere "Sosein", or the "truths" of the worldly continuum of valid but mistaken interdependent phenomenal aggregates. This is the fallacy of nihilism, avoiding transcendent ontology in vain. While there is no substance, there is some thing. Conventional existence is not denied. Things appear to exist as spatio-temporal, intersubjective formations with their functions, conditions & determinations. Absolute existence is not denied. The ultimate nature of phenomena is not what appears, and this negation is absolute & non-affirming, i.e. negating the realm of appearing phenomena as a whole (while relative negations always affirm something else, as "not-male" implies "female" and "not-evil" implies "good"). The speculative study of functional interdependence calls for the origin of the cosmos, the beginning of life and the meaning of human life. This order is imperative. After affirming there is something rather than nothing, the actuality, nature and meaning of this something is at hand. For anything to be, there must be operators functioning together in a spatio-temporal framework. How did this cosmos we find ourselves in happen ? Next we reason that, for anything to be alive, the cosmos must cause growth & gestation. How is life possible ? For anything to be human, culture must be present. What about consciousness & meaning ?
Subjectively, valid immanent metaphysics invokes the object-possessor, and its various sensate & mental objects, speculating about the human mind, freedom, liberty, solidarity, democracy, spirituality, etc. This gives way to vast domains : consciousness, thought, feeling, action & sensation. The conventional, speculative "truth" of immanent metaphysics is true in a provisional sense only. It is valid insofar as its arguments are clear, sound and convincing. So immanent metaphysics literally "stands next" to science ("physics"). It speculates in terms of totalized panoramas, incorporating crucial theories belonging to both physical and human sciences. These are intended to inspire the inventiveness and creativity of scientists, advancing discovery and expanding our knowledge-horizon. Immanent metaphysics, insofar as the arguments backing its speculations are warranted by empirico-formal statements of fact, is therefore the ally of science. Insofar as conceptual thought remains substantialist, cherishing invalid forms of immanent metaphysics, like ontological realism and/or ontological idealism, conventional truth is reduced to delusional opinions and conventional falsehoods. This involves the perversion of reason (cf. Kant's "perversa ratio").

§ 9 The Itinerary of Ontology.

• conventional, immanent ontology : speculative totalization of (a) the sensate conditions involving space & time and the forces operating between material, physical actual occasions (particles, waves & fields), (b) the informational, formal conditions or architectures pertaining to actual occasions & (c) the meaningful symbolizations of conscious entities ;
• ultimate logic : given the immanent sphere of sensation & mentation, as well as the totality of all realities & idealities, both sensate and mental objects are analyzed to discover whether they truly exist as they appear, i.e. as substances from their own side. As these cannot be found anywhere, one cannot posit objects to possess an inherent, essential existence ;
• absolute, transcendent ontology : beyond the conventional sphere, conceptual symbolization stops, and a gap, abyss, isthmus or "jump" is suggested. Direct, nondual, non-conceptual intuitive cognition is ineffable, has no mental residue and is one with "great compassion" ("mahâkarunâ"). According to the ultimate logic acting as an approximate ultimate to wisdom-mind, refuting all affirmative, kataphatic statements about suchness/thatness, nothing substantial can be said about this pinnacle of human cognition, cultivated in meditation, and unveiled in grand spiritual poetry. Wisdom is a direct encounter with the luminous singularity of the mind itself, with its own ever-enlightened nature.

To arrive at this speculative totalization, ontology needs a first principle. Monist logics privilege a single principle or monad. Materialism & spiritualism are historical examples. The former understands matter as the self-sufficient ground of the edifice, while the latter posits spirit as the principal. The advantage of monism is its unity. The system of ontology is erected upon a single ground, and so one does not need to explain any ontological differences between entities, for there are none. On the most fundamental level of reality, all phenomena share the same nature. Logically, such a solution automatically accommodates simplicity and the ideal of finding a single principle explaining the unity of science. A multiplication of founding principles is absent, allowing us to grasp the manifold with a single concept.
Materialism argues physicality to be this concept. Several reasons can be advanced. As Aristotle already remarked, "substance is thought to be present most obviously in bodies" (Metaphysics, VII, ii.1, my italics). If this is considered correct, then physicality must come first and so be promoted to the status of founding monad. Kant too privileged the senses, rejecting intellectual perception as not belonging to most men. By doing so, the impact of stimuli on the sensitive areas of our sense organs is given a higher ontological status than mental objects, deemed to be derived from the former. Sense data are turned into the rock-bottom of science. It eludes these thinkers that knowledge cannot be divorced from conscious apprehension, i.e. one cannot observe any object without an observer, and the latter does more than merely passively register the incoming sensuous flux, but co-determines it. Indeed, all observation happens in a framework of theoretical connotations at work from the side of the subject or subjects of knowledge in the act of observation. For alternative reasons, spiritualism thinks consciousness to be the first concept. Hegelianism is a modern, dynamical version of Platonism & Spinozism. Both fail to plunge deep and discover a more fundamental level. Criticism leaves these solutions standing naked (cf. A Philosophy of the Mind and Its Brain, 2009). Non-monist logics always introduce more than one fundamental ontological principle (a duality, triplicity, quaternio, etc.). Duality, with its powerful reflective capacities, introduces otherness. This is a first step outside the monadic & monarchic continuum, adding radical alterity as a new unity. But herein lies the weakness of dual systems, for now two principles are generated. How to reconcile their ontological difference in a single Nature ? If the ontological difference cannot be reduced to a more fundamental stratum, then the variety of fundamental ontological principles will cause ontology to miss unity, making it unclear how these two or more principles have to be thought together without breaking up the world into as many pieces as there are principles. Of course one may single out one principle and consider the others as merely illusions or dependent on the former, however not to the point of being included by it. Platonism is such a solution. The world is divided in two ("chorismos") without giving the same ontological & epistemological importance to these two divisions. The World of Becoming, due to its variety, multiplicity and change, is not rejected, but merely made dependent on the World of Ideas. So although apparently dualistic, Plato's solution is a monism in disguise. Building on Platonic ontology, the most influential ontological dualism of recent times was introduced by Descartes. But a radical difference must be noted. Plato considered the world of becoming as a "shadow" of the world of ideas, the latter being a paradigm for the singular things participating in it ("methexis"). For him, becoming participates in Being, and only Being has reality. Descartes introduces three different substances, each with its own distinctness leading up to a substantial difference : the ego cogitans, extension (matter) & God. The Greek depreciation of matter is gone. As God is transcendent, mind & matter are the fundamental substances of the world. Precisely because Descartes defined these two in terms of substance, implying objects endure from their own side, independent & separate from other objects, a pivotal problem arose.
How can two ontologically different substances, sharing no common ground (except God), work together ? Handicapped by this ontological dualism, Cartesianism was not able to deal with this, leading (after the failure of German Idealism) to a reduction of mind to matter, and a physicalist understanding of consciousness. Returning to the elegance of monism, and rejecting both materialist (physicalist) and spiritualist essentialism, let us ask : What is the fundamental concept bringing all phenomena under unity ? Reject substantialism or essentialism, for no single mental or physical substance can be posited, i.e. a "self-powered", autarchic object existing from its own side, independent & separate from all other objects, one existing inherently. The rejection of essentialism is the acceptance of the premise of process thought : there are no substances, there is no "substance of substances", and so all phenomena are "in process", i.e. ever-changing, impermanent and interdependent happenings (occasions not independent nor separate from other occasions). Moreover, "phenomena" are actual (not past, nor future) happenings hic et nunc. There is no "world" behind the "world", no "Jenseits". Process thinking focuses on the things in their actuality. Thinking process & actuality raises the question of the unit or standard of process. Before describing processes, their arisings, abidings & ceasings, as well as their efficient and final determinations, we have to arrest the first concept of this process-based monism, the ontological principal. Processes (P) go the way of actual happenings, concrete actual occasions (o1, o2, ..., om). Every existing object x is characterized by a set of actual occasions O(x) = {o1, ..., om} making x unique. This set constitutes the actual continuum of x. Everything outside the occasion-horizon of this continuum does not constitute x. Can we do more than accept an actual occasion o as a logical primitive, a given ? Following Whitehead (1861 - 1947) and his "quantum ontology" (Process & Reality, 1929) : (a) an actual occasion o, an instance of the set of actual occasions O = {o1, ..., om}, is an atomic & momentary actuality characterized by "extensiveness" ; (b) an event e, an instance of the set of events E = {e1, ..., en}, is the togetherness of actual occasions ; and (c) an entity en, an instance of the set of entities EN = {en1, ..., enp}, is the togetherness of events, while "entity" and "object" are synonymous. Extensiveness is what all actual occasions have in common. This extensive plenum of the actual continuum of each actual occasion is : (a) spatial : as in the case of geometrical objects ; (b) temporal : as in the case of the duration of mental objects ; (c) spatio-temporal : as in the case of the endurance of sensate objects. Entities and events are actual occasions interrelated in a determining way in one extensive continuum, and an actual occasion is a limiting type of an event with only one member. Nature is built up of these actual occasions. Events are aggregates or compounds of actual occasions. Entities are aggregates or compounds of events. When an aggregate or compound forms a society, a higher-order self-determination is at hand, a marker to distinguish non-individualized & individualized aggregates (or societies).
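Merely as a reading aid, this mereology admits a compact set-theoretic summary. The notation below is an editorial sketch, not Whitehead's own formalism :

% Editorial sketch (amssymb assumed for \varnothing) : occasions are atomic ;
% events are non-empty sets of occasions ; entities are non-empty sets of
% events ; an occasion is the limiting case of a one-member event.
\[
O = \{o_1, \dots, o_m\}, \qquad
E \subseteq \mathcal{P}(O) \setminus \{\varnothing\}, \qquad
EN \subseteq \mathcal{P}(E) \setminus \{\varnothing\},
\]
\[
\text{with } \{o\} \in E \text{ for every } o \in O
\quad \text{(an actual occasion as a limiting, one-member event).}
\]

Reading an occasion as a singleton event is one way of rendering the phrase "a limiting type of an event with only one member" ; nothing in the argument depends on this particular formalization.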
Monism coupled with essentialism has difficulty explaining the manifold, its multiplicity, variety, differentiation, complexity, richness & interconnectedness. This approach cherishes a single static factor. So certain aspects of the manifold (of Nature) cannot be explained. The reason is clear : no substances are found to exist. The combination fails because absolute autarchy & self-determination cannot be successfully argued. Thinking a single dynamic factor solves many of the problems. In the West, process-monism is rather recent. We find traces of it in Greek philosophy (Heraclitus) and a first draft in Leibniz. Elaborated by Whitehead, Process Philosophy emerged.

§ 10 The World-Continuum or World-System.

Classical Occasionalism, first propounded by the tenth-century Muslim thinker al-Ash'arî and found in the writings of the Cartesians Johannes Clauberg (1622 - 1665), Arnold Geulincx (1624 - 1669) and Nicolas Malebranche (1638 - 1715), rejects the idea that substances entertain any kind of relation. This is affirmed by Nâgârjuna in his A Fundamental Treatise on the Middle Way (Mûlamadhyamakakârikâ, 2nd century CE, chapter XIV), in terms of an analysis of "connection" ("phrad-pa"), denoting the relation between components in any compounded phenomenon as non-substantial, but also the relation among the conditions & determinations compounding them as non-substantial. This points to the absence of reification at any level of ontological analysis. Even the functionality of the efficient determinations characterizing phenomena -their location in a causal and mereological nexus, defining the logical properties of the relation of part and whole- is not permanent, autarchic and existing from its own side. Of course Classical Occasionalism had another agenda. Using the Cartesian substances "matter", "mind" & "God", it elaborated upon the consequences of ontological dualism, claiming that finite things can have no efficient causality of their own. Substances cannot be the efficient causes of events. In ontological monism, the question how two or more substances relate is a non-issue, for only one substance prevails. But as soon as the numerical singularity of the fundamental principle (the monad) is relinquished for dualism, thinking change and interrelatedness brings on the question how different kinds of things relate. Classical Occasionalism rejects the possibility of any kind of relation whatsoever. Different substances can a priori never bridge their natures. All physical & mental phenomena are merely "occasions" or happenings on their own, devoid of any interconnectedness and efficient power, utterly incapable of changing themselves. Physical "stuff" cannot act as cause of other physical "stuff", for no necessary connection can be observed between physical causes and their physical effects (a view returning in the writings of David Hume, for whom causality and other lawful determinations are merely psychological habits). Moreover, because mind and brain are so utterly different, the one cannot affect the other. Hence, a person's mind cannot be the true cause of his hand's moving. The mental cannot cause the physical and vice versa. Ergo, as events do exist, they must be caused directly by God Himself. For what God wills has to be taken to be necessary. So far this remarkable view. Let us take on board the idea that substances cannot relate to each other. It would seem, then, that the view that substances do not exist should be interpreted as affirming that all phenomena are interdependent processes. The conditions and determinations defining this interdependence or universal togetherness of all possible actual occasions are themselves co-existent with this stream of actual occasions making up what exists hic et nunc.
They do not exist "outside" these dynamical streams of actual occasions, forming aggregates and societies of actual occasions, events and entities. Like a swimmer, they are adaptive archetypes, intelligently altering their format while performing with style, preventing their momentum from drowning (dying out). An actual occasion is an atomic & momentary actuality characterized by "extensiveness". Although indivisible, an actual occasion is not a "little thing", but a meaningful (creative) momentary differential change "dt", explained in terms of efficient & final determinations. These act as the two state-vectors of all changes in all the processes involving all actual occasions conserved in the interval or isthmus "dt" of the present moment of the world. The structural analysis of actual occasions does not reflect a temporal sequence, for the two state-vectors of process are simultaneous. From the past, efficient determinations enter actual occasion x. Because of its iota of self-determination, x makes a choice (a minimal indeterminacy or "clinamen"), and this creativity enters the efficient determinations of the next actual occasion. In this way, a single actual occasion evidences the smallest possible degree of sentience. Aggregates form and these streams are interlinked and reinforced. Recurrent events form entities, each with their own actual continuum-streams, compounding and bonding into societies. At the level of societies, the experience of conscious unity is present, pointing to a higher-order consciousness, as can be seen in the "kingdoms of Nature", the minerals, the plants, the animals and the humans. If actual occasions were merely product-productive, manufacturing the world, the world could not display creative change and state-transformation. But the ongoing enrichment of the world is a fact of science. Negentropic transformation is the outstanding feature of life & consciousness. This creativity must ontologically be accounted for ... Actual occasions, the actual units of process, are Janus-faced : they take from the past and, on the basis of an inner, finative structure, transform states of affairs, paving the way for further processes. They are not merely product-productive, manufacturing things, but also state-transformative. In this way several degrees of togetherness or concrescence can be identified, called events, entities, aggregates and societies. The organic whole of actual occasions, the world-continuum or universal sea of process, extended from the extremely small to the humongous, is both physical and non-physical or mental. Both have distinct properties, consisting of actual occasions defined in efficient & final terms. The physical (the world of matter) is the domain of physical objects characterized by mass & momentum. The non-physical is, on the one hand, the domain of information (the world of embodied & disembodied mental, abstract, theoretical objects) and, on the other hand, the domain of consciousness (the world of the percipient participator endowed with decisive conscious choice and sentient self-determination). These three domains are complex societies of actual occasions. Moreover, the non-physical is not made part of, nor reduced to, the physical. The question of the functional role of the mental in the valuation of possible physical outcomes can be posed. Metaphysics no longer arrests downward causation, giving both the mental & the physical identical weight, but distinct functional roles.
"Efficient determination" is physical momentum & mass of the particles, waves, fields and forces at hand. "Final determination" is self-determination, creativity, valuation and the experience of conscious unity, entering efficient causality & producing novelty. Although indivisible, actual occasions are not "little things", but a differential change "dt" explained in terms of efficient & final determination. Couple process with a pluralist view on the distinctness of occasions (not on their ontological difference !) and embrace, in principle, an endless number of distinguishing attributes, aspects or operators (hylic pluralism), reducing these to the three complex societies known to function : matter (hardware), information (software) and consciousness (userware). Regarding the latter, the crucial distinction between consciousness per se (as a domain of the world-continuum) and human conscious experience (or inner life), as a very complex region in that domain, should not be missed. On this planet, the human mind is an extraordinary continuum of occasions, the only one capable of featuring inner life & conscious experience. So the world, or the totality of all observable events taking place in the universe, may be divided in three logical basics or primitives. Each is a complex society of actual occasions or a domain of the world.  Each is also an operator characterized by a function, enabling it to work a set of unique interdependent determinations & conditions, discharging its task in such a way as to make different events work together, form more unified functional wholes and harmonize their dynamic signatures, the universal intent in the Divine mind of the Architect of the World. By collecting well-determined events into a single set, three interacting sets are formed  : • matter or "hardware" (of which all elements are mostly M-events) : the physical space-time continuum, the executive hardware of working, physical compounds, defined by particles, waves, fields & forces ; • information or "software" (of which all elements are mostly I-events) : abstracts, universals, theories, codes, laws, architectures & algorhythms, the legislative software of natural & artificial expert-systems ; • consciousness of "userware" (of which all elements are mostly C-events) : free choice, self-determination, meaning, autostructuration, mentality, the intentional activities of subjectivity & inner life. These unique arrangements or world-domains are characterized by a prevailing type of mathematics, tendency, movement & order : • matter : Real Numbers, dispersive, centrifugal, entropic ; • information : Binary Numbers (1 and 0), integrative, algorhythmic, natural & cultural forms, limited but integrated set of natural & artificial expert-systems ; • consciousness : Complex Numbers, paradoxical, centripetal, negentropic, meaningful, symbolic & sentient. Although functionally stand-alone subsystems, they constantly interact on various levels of expression or functional co-relativity & interdependence. Because they are joined, a super-interactionist model allows to understand the relations, conditions, determinations & modes of communication between all actual occasions, events, entities, aggregates & individualized societies happening in the world : • C interacts with M : sensation & mental states = domain of sentience (awareness of objects). 
§ 11 Functional Co-Relative Interdependence.

Functional co-relativity outlaws absolute isolation and points to general interdependence. To define "ousia", substantialism (essentialism) has to defend absolute isolation. The essence ("eidos") of an object must have "own-nature" ("svabhâva"), i.e. some thing permanently existing from its own side, unaffected by the changes in its accidents, whether they be quantities, qualities, relations or modalities. As monads, substances must have no "windows". This entails three logical consequences : substantial objects are static, non-functional and self-referential. Because of these sordid features, they hinder the advancement of science & metaphysics. Substantial objects are static because their substantial core does not change (without changing the object into another object). Unchanging objects cannot relate to other objects, for the idea of relation implies openness to others and so openness to fundamental change. If an object is a self-identical monad, it has no "exits" and so cannot interact with other objects. These objects cannot move, produce or cause. Constant autoduplication ensues. Substantial objects are non-functional because they are isolated. Without any possibility to relate to other objects, they cannot produce efficient action, leading up to a relative impossibility to function. Where can these objects be found ? Except for analytical objects, all apprehended objects are functional. Substantial objects, due to their self-identical, inherent "being", have only themselves as sole referent and so cannot apprehend anything else than the monarchic affirmation of themselves and their self-powered own-nature ("svabhâva"). Their solipsism is however based on nothing else than this affirmation and therefore circular. Where can these objects be found ? All synthetic objects depend on determinations and conditions outside themselves. At the micro-level of physical reality, all objects are interconnected, and at higher levels this is also the case. In natural systems, there is nowhere anything non-referentially "on its own", for all events are part of a complex network of determinations & conditions. In artificial systems, processes may be isolated from their environments (like atomic fission), but this procedure entails lots of work to realize & sustain the quarantine, often with much damage to the environment once reintroduced (depending on the nuclear waste involved, hundreds of thousands of years of containment are necessary). Interdependence of actual occasions, events, entities, aggregates & societies implies function (or efficient conditions of determination). Two types prevail :
1. determined functions : in a system of general determinism, events are connected through a number of efficient determinations, like self-determination, causation, interaction, mechanical determination, statistical determination, holistic determination, teleological determination & dialectical determination. Events are linked if the conditions defining each category are fulfilled. For example, in the case of causation, it is necessary, in order for an effect to occur, to have an efficient cause and a physical substrate (propagating the effect in spacetime).
In contemporary scientific determinism, these determinations are not absolutely certain, but relatively probable, for science is terministic, no longer deterministic ;
2. nondetermined functions : considering the inner, mental structure of actual occasions and their togetherness (concrescence), as well as the individual actions of persons, cultures and civilization, phenomena are also connected by way of various degrees of free choice, intention, freedom, self-determination, valorisation, creativity and conscious life, both individual and social. This final determination escapes the conditions of the categories of any kind of lawful efficient determination. Indeed, without the possibility to posit nondetermined events moving against the system of efficient determination, ethics is reduced to physics and justice impossible. How is responsible action possible without the actual exercise of a degree of freedom, i.e. the ability to accept or reject a course of action, thereby creating an efficient-wise "indeterminate" influencing agent, changing all co-functional interdependent efficient determinations or interactions by entering them, thus adding negentropy to entropy ? How, without free choice, is genuine creative advance possible ?
All actual occasions are characterized by their two state vectors : efficient & final determinations. The former is their physical, outer, overt material activity, determined by particles, waves, fields & forces, the latter their mental, inner, covert sentient activity, determined by creativity, novelty & self-determination. Although a single actual occasion has only an infinitesimal iota of sentience, the fact of its togetherness with countless others, entering them with the result of an infinitesimal mental decision, brings about a cumulative effect, and these successive generations of additions allow -at some point- the emergence of societies, i.e. individualized aggregates endowed with the experience of conscious unity. Although an individual actual occasion has a very small degree of sentience in the form of a "clinamen", it is usually part of aggregates devoid of such experience of conscious unity. In that sense, remembering Leibniz, a crystal in a stone thrown at a cat has more affinity with the cat than with the stone. Process thought does not embrace full-fledged panpsychism, for then even the stone would be sentient. As an aggregate of micro-sentient actual occasions, the stone is non-individualized, i.e. does not experience its own unity. Thus, it drowns the micro-sentience of the actual occasions of which it is a mere compound in the non-sentient togetherness of its aggregation. As soon as a single, non-sentient object can be identified, panpsychism can no longer be defended, and indeed, Nature abounds with mere aggregates. Societies (like molecules of crystal or living matter) and complex societies (like humans) are rare. Panexperientialism affirms that actual occasions exhibit a (very small) degree of sentience, but denies the togetherness of them -devoid of the conscious experience of their own unity- to be sentient insofar as this concrescence goes. Observing the three domains of the world raises the question of the cosmic genesis. The conclusion that these three functions, namely matter, information & consciousness, were present from the Big Bang, albeit in varying degrees, cannot be avoided. Like the unfolding of a flower, the efficient determinations of the material domain came first, fixing the original physical parameters of the cosmos.
This first, physical unfoldment set the material ground. But together with this event, resulting from the activity of the final determinations in the original "primordial soup", order and structure emerged. This second, informational unfoldment set the conditions of the architecture of the cosmos. Because of this structure, the cosmos could expand and generate stars, the breeding-ground for the third, sentient unfoldment, bringing about life and consciousness. Only at this level did societies emerge. First in the form of crystal molecules and, due to complexification resulting from more efficient interactions, as the first living cells. Billions of years were needed to allow living societies to individualize their sentient component, eventually arising as the experience of conscious unity. Foreshadowed by plants, it exploded in animals and eventually evolved into humans. The root of these three cosmic unfoldments can however be found in the singularity of the primordial actual occasion of our universe : the Big Bang. This Big Bang singularity is a discrete moment in the inconceivable, beginningless & endless cycles of arising, abiding, ceasing and re-emerging worlds out of the world-ground, the possibility of all universa. Hence, we may speculate that everything acquired by countless conscious societies, well-ordered (informed) aggregates and efficient physical systems returns, at the Big Crunch (or Big Evaporation) of the present universe, to the original singularity. Not an iota of material, informational and conscious actuality is lost ; everything contributes to the evolution of the endless process of subsequent world-emergence, abidance and collapse. The new world to come is not a "tabula rasa", but endowed with the result of what happened in the one before. Eventually, at the point at infinite infinity, all possible worlds have evolved out of the world-ground into fully sentient societies, and the "Jubilee of Jubilees" is celebrated for ever and ever. Then, at this point, the eternal recurrent cycle of light-manifestations ("neheh", Atum-Re), the periodic process of worlds, joins everlastingness ("djet", Osiris).

§ 12 The Simultaneity of Relative Appearance & Absolute Reality.

Only after repeatedly inviting transcendent wisdom to inspire thought, cleansing the conceptual mind from its reifications, may prolonged ultimate analysis facilitate the opening of the gate to "seeing" the ultimate, absolute nature of all possible phenomena, their suchness/thatness or ultimate reality as it is. Ultimate analysis merely assists the conceptual mind to directly recognize the nondual truth in terms of a non-affirmative negation. Immanence is not a ladder from conceptuality to non-conceptuality, from the relative truth of conceptual thought to the ultimate truth of naked, non-conceptual, nondual cognition. Immanence only offers a threshold, an approximation, a generic idea encompassing the emptiness of the world as a whole. Indeed, a direct, naked state of cognition cannot be caused. The itinerary is not a certainty, but the preparation will certainly be welcome to sustain the awareness after it spontaneously dawns. Indeed, if the conceptual mind has not been thoroughly purified, reification will recur. Ontology based on confused cognition is the screen upon which the tragi-comical illusions of realism & idealism are projected and made to play. But although conventional reality does not appear as it truly is, being like an illusion, it "is", in an ontological sense, not identical with illusion.
Appearing like an illusion is not the same as being an illusion. A saint may dress as a dirty pauper. The pauper's appearance is like the illusion, for the saint appears not as s/he truly is. Whatever appearance the saint chooses, s/he remains sacred. Conventional truth (the relative nature of phenomena) is how ultimate truth (the ultimate nature of phenomena) appears. So the ultimate exists conventionally. All phenomena can be simultaneously experienced as devoid of substantiality and at the same time as functional, interconnected and mutually dependent. Knowing the ultimate does not cause "another" world to suddenly appear. Awareness of suchness/thatness is being conscious of the full-emptiness of each and every phenomenon (its emptiness and universal connectedness). The difference is therefore epistemic, i.e. intra-mental. Directly perceiving this-or-that ultimate nature of conventional appearance, this-or-that actual absence of substance hic et nunc, and this in the fullness of interdependence or, on the contrary, only experiencing appearances, merely depends on the discovery of the nature of mind, the fundamental dimension of the cognitive apparatus. As long as the nature of mind remains undiscovered or obscured, conceptual thoughts overlay it and mental designations are reified, producing "objects" such as the idea of a self-powered physical body, a substantial mind and a solid, separate self. These further cover the nature of mind, bringing emotional afflictions, sickness, an unhappy old age and an unwholesome death. Ultimate truth, as approximated by the logic of ultimate analysis, the pinnacle of conventional ontological truth, clarifies all phenomena to be full-empty, i.e. full of functional interdependences but empty of inhering, intrinsic, substantial, non-referential, essential qualities, characteristics, natures, etc. Full-emptiness contradicts substantial existence, but not functional interdependence. "Full-emptiness" translates the unity of emptiness & interdependence. Ultimate truth as given by direct, nondual experience makes us "see" how all possible phenomena, while devoid of substantial essence, are interdependent "displays" or the "sport" of brilliance of the ground-luminosity, the ultimate base of all, the world-ground. Whether any ontological exercise, the present included, exceeds the limitations of creative thought cannot be conceptually established.

§ 13 Transcendental Philosophy and Nâgârjuna.

Transcendental philosophy (Criticism) aims at the process of the synthesis of phenomena rather than at a supposed sufficient ground underlying them. Precritical epistemology based the possibility of knowledge on this "Ding-an-sich" (Kant), called "noumenon", thing in itself or absolute (ultimate) ground of phenomena. Criticism ends this. Indeed, the object of science is not a pre-epistemic ultimate Real-Ideal (the unity of absolute reality and absolute ideality), and so does not depend on a self-sufficient ground preceding cognition, but exclusively on the interconnectedness between actual occasions and their modes of togetherness. These are dynamical architectures, various styles of coordinated movements or dances, artistic displays of various degrees of order (negentropy), i.e. unfolding, showcasing & folding things. They are only relative to movement, to process, and result from a universal and necessary mode of connection between phenomena. This denotes objectivity, not the "Being" of some absolute thing like a Real or an Ideal before and outside knowledge.
An Archimedean ground is nowhere found. Indeed, something is objective if it holds true for any active subject of knowledge, not because it denotes intrinsic, inherent properties of entities supposed to be independent, separate and so autonomous. This is the leading idea of the transcendental reflection on the conditions of the known, of knowledge and of the knower. Science is therefore not the revealer of a pre-existent underlying self-sufficient ground or "hypokeimenon". Epistemology is not the rooting of the possibility of knowledge in something before knowledge. The Real-Ideal is not the object of science. But neither is science random. Indeed, merely conventional, science is a temporarily stable but ever moving product of the process-bound reciprocal relation between the subject and the object of valid empirico-formal knowledge. Kant's Critique of Pure Reason still has residual foundationalist streaks. Although defined as a noumenon, the absolute ground lies across the knower. This indirect relation is to be differentiated from the direct stream of perceptions on the side of the knower. The latter arise in a subject only crosswise affected by the thing in itself ! One cannot say this contact with the absolute causes the direct perceptions recorded by the knower, for causality happens during categorial synthesis, two steps later. This transversal relation between the knower and the absolute is a residue of the substantialist tradition seeking a self-sufficient ground (before knowledge). This is Kant's Achilles' Heel, but it can & should be removed from transcendental philosophy. Indeed, this remnant of substantial dualism between the knower and the absolute has been eliminated by neo-Kantianism. It promoted an immanentist and relational transcendental philosophy of science. Objects do not bear intrinsic properties, but result from interdependence, relations and interconnectedness. They are process-based instead of substance-based. There is no ground or pregiven, pre-existent and pre-organized absolute "substance of substances". Moreover, the static framework developed by Kant has been replaced by dynamical a priori forms and their plurality. The highly abstract view of Kant made way for the study of the pragmatics of the game of "true" knowing. The reciprocity between the knower and the known is pivotal here. Interlocked, but cherishing different interests & outlooks, they continuously engage in a concordia discors. This view on science is antifoundationalist, immanentist & relationalist. Science provides the best conventional knowledge ever. In the Critique, Kant wanted a philosophy as universal & necessary as Newton's law of gravity. His aim was not soteriological. In his Mûlamadhyamakakârikâ, Nâgârjuna aims at a wisdom ("prajñâ") realizing the ultimate truth ("paramârtha") of all phenomena. Not because this satisfies philosophical or intellectual pursuits, but because such realization liberates sentient beings, awakening them to the nature of their mind. In this foundational treatise of the Middle Way School (Mâdhyamaka), he presents this wisdom in accord with the profound and refined rationalism of Buddhist logicians, philosophers and scholars. Nâgârjuna's exclusive quest was to free all sentient beings from reified conventional truth ("samvriti"). Take away the reification and the absolute dawns. But the latter is indeterminate and non-accessible to the conceptual mode of cognition. The possibility to directly experience the ultimate nature is however not denied.
Contrary to Kant, Nâgârjuna and the Buddhadharma at large accept (a) meta-rationality (the nondual mode of cognition) and (b) the possibility of directly cognizing the absolute. This is realizing the wisdom of the enlightened ones. Hence, his work is foremost soteriological. Keeping this in mind, let us discuss Mâdhyamaka (Nâgârjuna, Âryadeva, Candrakîrti, Shântideva) in the light of a few remarkable parallels with transcendental philosophy. For different reasons, both Nâgârjuna and Kant attack all possible substance-thinking. Kant defined the noumenon as a limit-concept, only pointing obliquely towards our sensibility and thus of negative use only. But he also maintained a quasi-causal, transversal (indirect) relationship between the thing in itself and the knower, leading to inner inconsistencies. Later neo-Kantians considered the thing in itself as nothing beyond the brute fact of its givenness, of it not being produced by a deliberate act originating in the subject. Criticism goes a step further, replacing the description of the cognitive act with a normative system of conditions producing valid knowledge. One must consider facts to represent the absolute, but this may well be mistaken ! This normative move evaporates the residual substantialism and brings to the fore a few interesting similarities between transcendental philosophy, the epistemology of science, and Nâgârjuna, the founder of the Middle Way school. Nâgârjuna's analysis is immanentist throughout. Like Kant, he insists the world should not be construed as a single absolute entity of which something can be predicated. It is like an indefinite series of flickerings, much like the flame of a butter lamp. Moreover, conventional knowledge is empty of any relation with a solid, substantial and inherently existing objectivity. Objectivity is not a pre-epistemic substantial ground. Conventional knowledge has no access to the thing in itself, the supposed absolute or ultimate nature of all phenomena. To discover that all phenomena are empty of a substantial core is to realize the universal, lawlike, reciprocal relativity of co-dependent consecutive actual entities. The ongoing display is one of creative advance, with entities entering each other's togetherness. Conceptual reason does not discover the absolute nature of phenomena, but reveals the arising, abiding & ceasing nature of all relative events. For Nâgârjuna, science is an exceptionally efficient and valid conventional truth, but also extremely liable to reification and so to delusion. Kant too points to the danger of turning ideas of reason into substances "out there". Certain subjective rules are mistaken for objective determinations of the things in themselves (cf. his "transcendental illusion"). This cannot be taken away, only revealed through criticism. Like all conventional knowledge, science tends towards superimposing inherent, substantial existence upon process-based, nonsubstantial actual entities. It tries to fixate the fluid & transient. We cannot help seeing the world as if inherently possessing certain determinations. With respect to our conventional experience, it always remains the case as if ("als ob") subjective rules were an intrinsic feature of the world ... Conventional knowledge is valid but always mistaken ! Indeed, if the observer partakes in the network of relations producing conventional knowledge, things appear to him or her as if well-defined nonrelational determinations (inherent properties) arose from any measuring interaction.
Relative to the observer, well-defined features appear as something substantial. This reification is however an illusion, for it makes things appear as something different from what they are. They appear, while they are processes, as substances !

"Your position is that, when one perceives
Emptiness as the fact of relativity,
Emptiness of relativity does not preclude
The viability of activity.
Whereas when one perceives the opposite,
Action is impossible in emptiness,
Emptiness is lost during activity ;
One falls into anxiety's abyss."
Tsongkhapa : The Short Essence of True Eloquence.

Criticism seeks a higher-order solution to the tensions between science, critical metaphysics and a nondogmatic soteriology like the one proposed in the Buddhadharma. Transcendental philosophy and the Middle Way provide lots of arguments backing the empty, dependent, impermanent and nonsubstantial nature of what is. While transcendental philosophy identifies the detailed mechanisms of reification, the Middle Way wants to dispel them once and for all. To link critical thought with this intent is to open reason to the meta-rationality of cognition, which is precisely the aim of critical metaphysics. It should be remarked that Kant sought a transcendental philosophy as "solid" as Newton's physics. The latter portrayed absolute properties and substantial material objects existing from their own side. In the most cherished Copenhagen interpretation of quantum mechanics this is no longer the case. Quite the contrary. The historical continuity with classical physics has been broken. A holistic definition of phenomena is at hand. The object can no longer be dissociated from the contribution of the irreversible functioning of the measuring apparatus. The Hilbert space structure used in quantum mechanics conveys the relational nature of our knowledge about the physical, while involving no description of the two relata. Moreover, the extensive use of differential calculus (even in classical physics) shows that only (infinitesimal) relations are accessible. No substantial, monadic ground of these is implied. There are no absolutized relata. Indeed, quantum mechanics points to our knowledge as "relational", with neither prius nor posterius between object & subject. Other interpretations, like the "hidden variable" hypothesis, are desperate attempts at restoring substantialism in physics. As Nâgârjuna remarks : neither connection, nor connected, nor connector inherently exist. The existence of relations to the detriment of the relata would imply the use of an opposition (relation/relata) and the reification of one of its terms, while the two terms arise in dependence. Object and subject are on the same footing, there is a nonpolar conception of relations between them and so reification of either is avoided. Relations are determined by certain connections of things, and this depends on the way an observer takes cognizance of the observed system ...
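A standard textbook formula may illustrate this relational point ; it is quoted here merely as an illustration, not as part of the argument :

% The Born rule : the probability of finding outcome a in state psi is a
% function of the inner product relating the two, not of intrinsic
% properties of either relatum taken separately.
\[
P(a \mid \psi) \;=\; \left| \langle a \mid \psi \rangle \right|^{2}
\]

The inner product characterizes neither the state nor the measurement outcome by itself, only their mutual disposition, which is precisely the relational feature invoked above.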
The present text, inspired by the traditional classification of topics, is divided into two parts, called "General Metaphysics" and "Metaphysics of Specifics". The First Part, General Metaphysics, explains metaphysics in general and ontology in particular, laying the groundwork (chapter 1) and attending to the necessary requisites for any metaphysical inquiry (chapter 2). After having clarified the conventional nature of immanent metaphysics (chapter 3) and defined the limitations of speculative thought in terms of creative thinking (chapter 4), the mind is prepared for ultimate truth (chapter 5) and, to ascertain the lack of inherent selfhood and the lack of inherent phenomena, ultimate logic is developed (chapter 6). Finally, the general features of the world are derived (chapter 7), ending the introduction to General Metaphysics. In the Second Part, or Metaphysics of Specifics, and this within the framework of the proposed ontological scheme, particular questions are answered. These bring to bear metaphysical cosmology (chapter 8), metaphysical cybernetics (chapter 9), metaphysical biology (chapter 10), metaphysical anthropology (chapter 11), metaphysical mysticism (chapter 12) & metaphysical theology (chapter 13). Within these broad divisions into parts and chapters, the text subdivides into paragraphs. Each paragraph is composed of units identified -in praise of Aristotle- by Greek letters. At times, a unit is Janus-faced, composed of an object-dependent and an imaginal side (the latter starting with "∫"). The former is elaborate, the latter aphoristic, iconic, laconic and ironic. This bi-polarity satisfies the conditions imposed by the chosen style. At the end of every paragraph, a "Lemma" is advanced. In formal logic, this is a subsidiary proposition assumed to be true in order to prove another proposition. Here, it is a short summary of the salient, outstanding points assisting further development.

Part I : General Metaphysics.

Thomas Aquinas, following the three divisions set by Aristotle, divided the study of "sapientia" or "wisdom" into "metaphysica" (being as being), "prima philosophia" (first principles) and "theologia". This scheme remained intact until early modern times (1500 - 1800 CE). Christian Wolff replaced it by dividing metaphysics into general and special metaphysics. General metaphysics or the science of being as being was given the name "ontologia" (a term coined by Rudolf Goclenius in 1613), whereas special metaphysics was divided into rational theology, rational psychology and rational cosmology, i.e. the sciences of God, souls and bodies respectively. The impact of the rise of the new sciences is obvious. The spirit of the Renaissance stimulated philosophers to expand their horizon, incorporating many new topics into metaphysics. However, these superb minds were not yet inclined to first consider -before engaging in speculative activity proper- the natural capacity of the mind and its knowledge-seeking cogitations. The epistemological turn had not yet taken place and intellectuals still entertained a naive theory of knowledge, one positing a direct conceptual access to reality-as-such or ideality-as-such. Bewitched by this ontological illusion (reifying mere concepts), concept-realism was still deemed unproblematic ! Measuring, before entertaining speculation, the natural possibilities of the mind, Kant's "Copernican revolution", besides being the decisive criticism of concept-realism, demarcated science from metaphysics. Although the Sun appears to rise and set, in reality it does not, for it is merely the Earth turning. Objects do not appear as they are. We should have tools to decide whether phenomena are merely appearances or indeed more. Subordinated to epistemology, Kant's "metaphysics of nature" is divided into a general part, namely ontology, and a specific part, namely the physiology of reason.
The latter was divided into a transcendent part (rational theology, rational cosmology) and two "immanent" parts (rational psychology and rational physics). A "natural" metaphysics is one staying close to what is known about Nature, one focusing on the sensate objects gathered by the senses, the mental constructs processing these, as well as other mental objects like the self. This clearly distinguishes metaphysical speculation from theology. In the course of the centuries, the meaning of the word "theology" shifted considerably. The main divide lies between, on the one hand, the organized world religions (Hinduism and the three "religions of the book") and their revealed dogmas and, on the other hand, an arguable discourse on the Divine in general (cf. Criticosynthesis, 2008, chapter 7) and God in particular. In the present metaphysics of process, the word "God" has been deconstructed. This is indicated by adding an asterisk (*) to it. This points to the fact that the traditional characteristics given to the "God of revelation", like creative activity "ex nihilo" & omnipotence, are not endorsed here. Hence, God*, this remarkable metaphysical object, is part of metaphysical theology, a branch of special metaphysics. Kant's division between "immanent" and "transcendent" is to be noted. Divide metaphysics into, on the one hand, immanent speculations on the order of the world and, on the other hand, transcendent speculations about what is supposed to exist beyond the limitations of the world ; the actual infinities transcending the world, the end-points at infinity of an infinite number of infinite series ... Due to the advent of the new sciences, a redefinition of the discipline of philosophy has to be realized :
• normative philosophy : logic, epistemology, ethics, aesthetics ;
• descriptive or theoretical philosophy : metaphysics.
Speculative activity unbridled by critical epistemology (cf. Criticosynthesis, 2008) is most likely to get out of hand. Then, the natural mind is no longer equipped to cognize in a valid empirico-formal, conceptual way, resulting in the multiplication of entities, blatant logical errors, extreme views (like nihilism or eternalism), uncritical skepsis and many other mental obscurations like a lack of mental pliancy. As part of philosophy, metaphysics is theoretical, i.e. involves a description of the discipline itself (general) and an elucidation of its objects, topics, issues, etc. (specifics). This metaphysics or theoretical philosophy covers all theoretical subjects not dealt with in a normative discourse. History, language, hermeneutics, the cosmos, life, consciousness, God*, etc. are possible topics. Coming after the normative disciplines of logic, epistemology, ethics & aesthetics, the descriptive activity of a metaphysics of being or ontology heralds the "end of philosophy". This makes the merely formal disciplines act as guardians of a descriptive & totalizing speculative intent. These safeguards highlight the limit-ideas of a metalanguage of principles, norms & maxims, ruling valid knowledge, good actions and beautiful sensations. These rules assist the intent to totalize our understanding of the world and beyond. Emphasizing nearness & distinctness, metaphysics -divided into immanent & transcendent- is given a border to share with science. Science cannot exorcize metaphysics (from its background), nor can metaphysics be validated without adding scientific fact to its arguments.
The crucial difference between science & metaphysics is the non-testability of speculative statements. Indeed, whereas the empirico-formal propositions of science (the statements of fact consolidating the core of the current scientific paradigm) are based on both testable & arguable processes, the totalizing speculations of metaphysics are only based on argumentation. But to argue the validity of a speculative totality is an exercise bringing into play all normative aspects of formal reasoning simultaneously. Hence, not only logical & epistemological considerations are at hand, but also ethical & aesthetical, and these together in a sublime, coordinated and creative dance. Everything needed to perform such a splendid move must be provided, carefully chosen, put in place, rehearsed, etc. This takes decades. General metaphysics covers some of the conditions of this process. It tries to invoke the spirit of metaphysical inquiry and summon its speculative power !
general metaphysics : general features, ontology
special metaphysics : philosophy of language, speculative theology, cosmology, biology & psychology, etc.
General metaphysics has two branches, investigating (a) the general features of metaphysical inquiry (cf. first philosophy) and (b) being qua being, i.e. the nature of all possible being or ontology. This kind of speculation is to be viewed as the "summum" of metaphysics. Demarcated from the general characteristics of any metaphysical inquiry & argumentation, ontology, being the most general of metaphysical disciplines, naturally belongs to general metaphysics. Special metaphysics studies specific objects like God* (metaphysical theology), the cosmos (metaphysical cosmology), life (metaphysical biology), human consciousness (metaphysical anthropology), language, history, law, society, politics, economy etc. Chapter 1. Introducing Metaphysics & Ontology. In this first chapter, the general contours of the present critical metaphysics of process arise. Starting with an investigation of the issue of style, i.e. the best way of expressing speculative thought, the fundamental principle of process metaphysics defines the axiomatic base, reflecting a choice for a single principle or monism, grounding the further elaboration of the system. This basic choice is confronted with epistemological criticism, probing for the limitations of all conceptual cognitive activity and confronting these with the speculative, totalizing intent. Rejecting conflictual & reductionistic epistemologies, the polar structure of the cognitive spectrum is affirmed in accord with transcendental logic. Apprehending sensate and mental objects, the subject of experience is an object-possessor. Both types of objects are confirmed and their distinct properties acknowledged. Such distinction does not lead to ontological difference, but merely to ontological distinctness. In order to circumambulate process metaphysics, a few major historical vantage points are discussed and criticized. The core problem is the uncritical reification of object and/or subject of experience, turning them into hypostases or realities (idealities) underlying thought. Once this is out of the way, process thinking reopens the door to science. Then and only then can metaphysics become the ally of valid empirico-formal thought.
Making speculation dependent on conventional knowledge and its apprehension of what exists (either in sensate or mental terms), metaphysics fulfils its Peripatetic role of being a theoretical form of philosophy "next to" the domain of science, as it were fructifying it. Studying the way metaphysics cannot be eliminated from the latter enhances its status as a discipline necessary for the advancement of knowledge, albeit in an uncomfortable fashion. This raises the question of the advancement of metaphysics itself, i.e. its ability to increase its logical, semantic & pragmatic relevance, if not significance. This elucidation of the advancement of metaphysics is aided by the crucial distinction between speculative activity remaining within the boundaries of what is the known world, or immanent metaphysics, and theoretical philosophy leaving these boundaries behind, as in transcendent metaphysics. While the former can be validated, the latter cannot. So then how is a valid transcendent metaphysical inquiry possible ? This question leads to a hermeneutics of sublime poetry ... Finally, having established (immanent) metaphysics and its validation by way of argument, the fundamental move favouring monism is applied to the most general of questions : What builds all possible phenomena ? What do all objects have in common ? This calls for an ontological scheme rejecting both materialist & spiritualist metaphysics. Neither physical objects nor mental objects constitute phenomena. Instead, momentary actuality is introduced as the ontological principle, bringing process metaphysics close to the fundamental realities of both physics and psychology, namely the collapse of the wave-function in quantum mechanics and the reality of moments of consciousness in psychology and anthropology. 1.1 Metaphysics & Science. Because metaphysics is irrefutable in terms of testability, it has been driven out of the domain of science, encompassing all valid empirico-formal statements of fact. This demarcation, once deemed sufficient to eliminate metaphysics, is however problematic. Indeed, every experimental setup and even every valid scientific theory cannot be properly articulated without untestable metaphysical concepts animating its background. Consider post-Kantian criticism of metaphysics, in particular positivism (Comte) and neo-positivism (Carnap). Here we have two radical departures from metaphysics blatantly failing to deliver. In the former, metaphysics belongs to the second stage after theology (the first stage) and before science (the third stage). The supernatural powers described in the first stage are transformed into abstract notions or entities hiding behind empirical phenomena. Both negativisms (of theological or metaphysical entities abolishing sensate objects) are rejected and replaced by the positivism of empirical phenomena. Neo-positivism radicalizes this view. For Carnap, metaphysicians are musicians without musical skills ! Metaphysics cannot convey any cognitive insight but has only emotional appeal, and this in an inadequate way. Hence, as they are not tautological, nor validated by direct (sensory) experience, metaphysical statements are necessarily pointless, merely conglomerates of meaningless strokes or noise.
These approaches, haunted by headaches caused by fifteen centuries of Catholic dogma and four centuries of conflicting metaphysical inquiries, forgot the crux of the matter : the distinction between sensate & mental objects cannot be defined on sensate grounds and so must contain a metaphysical element, i.e. one based on mental objects validated by way of argument only. Metaphysics is an unavoidable "vis a tergo" to befriend with caution, for sure, but impossible to rule out, except at scandalous and hence unacceptable costs. And although they cannot be as precise as scientific thinking, speculative activities compete in terms of the soundness of their arguments, coherence with other theories, appeal, fruitfulness, elegance and simplicity. The question is not how to eliminate speculative thought, but how to bridle it in such a way as to speed up the carriage of science. The era of cooperation between both has finally dawned. Moreover, besides assisting science, metaphysics also (and foremost ?) directs the mind to its largest unity, extent & harmony. No doubt, these carry the spring-board to the highest pursuit : the direct experience of ultimate truth. Thus apprehending full-emptiness, one simultaneously cognizes the emptiness of all possible objects and the fullness of the interconnections between all possible things resting in the bosom of Nature. A. Object-Dependent, Imaginal & Perspectivistic Styles. § 1 The Issue of Style. α. Put in general terms, "style" is the manner in which an issue is addressed, its dynamism of expression. Style is characteristic of a particular subject matter, but also of a person, group of people or historical period. Insofar as texts are concerned, different styles call for different kinds of writing. ∫ People without style disturb. Chattering geese keep the flock, but the eagle flies alone, undisturbed by the horizons of petty existence. β. Stylistic choices are defined by the way the author wishes to convey meaning. Although style ideally affects not the truth & contents of what is communicated (the logico-semantic value), but mostly how language effectively persuades (the rhetorical value), it nevertheless has a direct impact on how information is understood. This implies the latter may conceal the former and this may be part of the intent of the author. ∫ With style differences can be embraced. Without style, Papageno better keeps his lips locked. But how strong is his desire to speak out ! Like Hapi, the baboon of dawn, gross minds only vocalize to communicate. But to catch the glowing breath of the Morning Star, an intense silent gaze suffices. γ. In literary criticism, a fundamental line is drawn between non-fiction and fiction. Creative writing can be found in poetry, fiction books, novels, short stories, plays etc. ∫ To dream in colours is to see what cannot be seen by any eye. To hear trees sing is the privilege of those walking in pure lands. To smell a splendid cuisine while soundly asleep is the art of connoisseurs. To fly or feel the breeze in Morpheus' lap, or to taste the honey of the night is the endearment bestowed by the gods. May all sentient beings dream and lucidly so. δ. An exposition of styles identifies expository, descriptive, analytical, academic, technical, persuasive and narrative writing. δ.1 Expository writing focuses on a known topic and informs the reader by providing the facts. δ.2 Descriptive writing uses lots of adjectives and adverbs to describe things, conveying a mental picture.
δ.3 Analytical writing organizes the exposition by way of a stringent logical structure enabling the necessity of the truth-value of what is conveyed to surface. δ.4 Academic writing takes a third person point of view and brings in deductive reasoning supported by facts to allow a clear understanding of the topic to emerge. δ.5 Technical writing elucidates complicated technical information about the issue at hand. δ.6 Persuasive writing provides facts & arguments to promote a view having the ability & power to influence its readers. δ.7 Narrative writing enumerates events that have happened, might happen, or could happen. ε. Philosophy has always adapted its stylistic choices to its audience. Down the ages, a multitude of styles have been used and meshed together. Some philosophers use fictional styles (the poetry of Parmenides, the dialogues of Plato, the meditations of Descartes, the literature of Nietzsche), while others focus on the academic (Aristotle, Thomas Aquinas, Kant), the analytical (Spinoza, Wittgenstein I, Sartre), the descriptive (Heidegger before "die Kehre"), the technical (Russell, Quine) etc. ∫ Philosophers are merely jugglers. Using different styles to formulate two similar utterances makes the reader wonder whether these different styles intend to carry additional meaning. If not, it surely opens the text to meaning-variability and unexpected turns & creativity. ∫ Readers are heroic beings. They climb steep rocks to attain the summit of understanding. Arrived at the top, they witness more and even higher mountains. It cannot be avoided. The infinity of it all makes any attempt to put the world in a box hilarious. Tragedy invokes comedy and laughter in itself forebodes the twilight of creation. Opening one's door to the stranger of novelty is the only solution. Thinking with style necessarily makes one gracious, kind and ... welcoming. Insofar as philosophy is at hand, two major styles emerge : the object-dependent and the imaginal. In the former, the style is derived from objects, leading to academic, analytical, technical and descriptive approaches. In the latter, a deeper sense is conveyed by triggering the reader's imagination, calling for fictional, persuasive and narrative writing. ∫ Stout choices are a sign of intelligence. But on what does any choice truly rest ? Choices have to be made, true, but they are like a patchwork. The pieces are distinct, but not different. If they were not fundamentally so, nothing could bring about anything. In the present text, object-dependent and imaginal styles are combined. The former brings in a logical structure, whereas the latter, taking advantage of the unavoidable incompleteness, inconsistency and ambiguity of any analysis, invites the imaginal function of its readers. We may conjecture this combination gives birth to a very particular, rather independent style, one identifying and opening new perspectives. This choice is rooted in neurophilosophy, avoiding hemispheric lateralisation and taking in the advantages of the neuronal bridge between the two sides of the neocortex. ∫ While knowing even the proud mountain ranges eventually crumble, with style we try to dance like flamingos in love ... Most philosophers avoid discussing their style and take it for granted. In doing so, their exercise is limited by the conditions of the manner with which they address their audiences. People are smart and need a proper invitation.
Here, two sides are simultaneously at work : a linear, serial, differential, object-dependent ascent and a non-linear, parallel, integrative, imaginal one. Both lead to a certain kind of conclusion helping us to attach our climbing-ropes to a more secure mooring-post, assisting us to reach out for our next base. § 2 Deriving Style from Objects. α. When the mind of the Renaissance, still imbued with a Medieval spiritual mentality, was pressured by the conflicting intent of the Reformation and the Counter-Reformation, it slowly made way for the scientific world view. As a result, philosophy tried to derive its style from objects. Empiricists would cherish sensate objects, rationalists mental objects. In doing so, one hoped metaphysics, in particular the address of totality, could be retained without ridicule. Theology, the address of infinity, was deemed without object. In 1666, Jean-Baptiste Colbert, minister of King Louis XIV, founding the French Academy of Sciences, forbade astronomers to practise astrology. The aim of the Academy -at the forefront of scientific developments in Europe in the 17th and 18th centuries- was to encourage and protect the spirit of French scientific research. This heralded the official end of the Hermetic Postulate : "that which is Below corresponds to that which is Above, and that which is Above corresponds to that which is Below, to accomplish the miracles of the One Entity." (cf. Tabula Smaragdina, 2002). As a result, all things "occult" were relegated outside the mainstream, turning them into an interest of chamber scientists (like Newton & Goethe). Far gone was the idea Nature was an interconnected pattern, a living tissue of visible and invisible spiritual forces influencing humanity as well as the stars. Instead, the material world became a disparate clockwork of "disjecta membra", a "nature morte" devoid of "telos", "causa finalis" or inner purpose. ∫ When A is rejected, -A need not necessarily be embraced. Of course, silly superstitions are not valid science, but the intent of the words is more important than how things are said. Despite a spiritualist interpretation, the Hermetic Postulate aimed to underline the interconnectedness of all natural phenomena. Today, this metaphysical dream of the Ancients is again emerging in the mathematics & experiments of the new physics, albeit without the "machinery" of the spiritual agents serving the God of Abraham. Does throwing the child out with the bath-water lead to finding the child again ? Rejecting something makes one dependent upon what was rejected. β. Hand in hand with the rise of modern science, four metaphysical ideas became prominent : β.1 objectivism : the objects of science exist independent and isolated from the mind apprehending them "out there". They possess a nature of their own, one having characteristics abiding inherently as their essence, substance or inherent core ; β.2 realism : these independent objects of science existing on their own exert an influence known by the human mind passively registering this and in doing so acquiring knowledge about them ; β.3 universalism : the objective, real knowledge gathered is the same in every part of Nature, i.e. scientific knowledge has closure ; β.4 reductionism : all phenomena of Nature can be reduced to physical objects and their interactions. γ.
Insofar as this modern version of science, to be labelled uncritical, materialist and thoroughly European, gained prominence and became the spearhead of the tinkering harnessed by the Industrial Revolution, philosophers either rejected reason (as in the Protest Philosophy of the Romantics) or considered, to avoid the shipwreck of metaphysics, an object-dependent style as the only way out. Enthused by these developments, they even tried to exorcise the core task of speculation : totality & infinity. They tried, but failed. δ. An object-dependent style fosters analytical, academic & technical writing. In doing so it merely copies the itinerary of materialist science and the industrial approach. Analysis does not necessarily call for synthesis. Academia may replace the authoritarian systems of old, safekeeping the dogmatics of the paradigmatic core. The Bellarmine-effect is therefore their greatest foe. Technical writing forgets the underlying first person perspective, concealing it by the illusion of presence, adequacy & efficiency. Modern science is making way for hyper-modernism, a modular & multi-cultural view moving out of the European fold, one embracing Eastern science as well. ∫ The tragedy of exclusivity leads to the negation of totality, to the inflation of details at the expense of a regulating unity. ε. By itself, object-dependent writing is not problematic, but its exclusive use clearly is. No system can prove its completeness, eliminate all inconsistency and provide absolute predictability. Knowing this, one may still use a clock, but never without accepting the irreducible margin of error, the principle of indeterminacy of all possible physical objects. ∫ The imperialism of language needs to be abandoned, complementing word with picture, seriality with parallelism, denotation with connotation. ζ. In the 19th century, despite Kant, materialist science and its ill-advised youthful successes continued to gain ground. Because the intent of the Copernican Revolution, showing how objects merely appear and so conceal their truth, was misunderstood, criticism was not assimilated. Despite Kant's best efforts, his three Critiques were deemed a form of contradictory idealism, feeding the brontosaurus of German Idealism, turned upside down by Marxism. Instead of grasping them for what they are, namely a new understanding of science per se, they were rejected as an incomplete attempt to pour old wine in new bottles. During his lifetime, the titanic, solitary effort of the master of Königsberg could not be completed. But it is possible to reconstruct his work in such a way as to avoid the inevitable traps he fell for (cf. Criticosynthesis, 2008, chapter 2). In doing so, objectivism, realism & reductionism are unmasked as fatal errors of a "perversa ratio". ∫ Do not think this perverted, sterile rationality to be grave bound. Today it haunts the Western mind as a zombie, draining the life-force out of scientific novelty. A resurrection of the organicism of the spirit of the Renaissance is at hand. If not by choice, then by the tidal wave of dissatisfaction and alienation, both in terms of culture and ecology. When philosophers are the handmaidens of theology, their speculative efforts are limited by the reasons of dogma. But fideism is not a valid ground for conceptual thought. When they become the slaves of materialist science, philosophers trumpet the jubilee of the misunderstanding of phenomena, including philosophy itself.
Although metaphysics depends on valid science, it does not depend on a metaphysical view of science, not even a materialist one. To a mind pleasurably excited by ticking clocks, by the turning of the wheels of the engines of industry or by highly complex natural objects like the human brain or the cosmos, it may indeed seem as if physical objects are the "nec plus ultra" of reality and hence speculating about non-physical objects merely pointless noise. Nevertheless, ongoing test & theory always provide antidotes against too much bewilderment. The Newtonian dream has ended. Although the object-dependent style derived from this cannot be rejected, it cannot be used at the expense of other styles either, in particular its antidote and complement : the imaginal style. § 3 Imaginal Style. α. Consider the millenarian tradition of the proto-rational sapiential discourses of Kemet, the golden verses of Pythagoras, the "dark" sayings of Heraclitus, the fragment of Anaximander, the two ways of Parmenides, the poetry of Xenophanes, the dialogues of Plato or, at the far end of this series, Boethius' De consolatione philosophiae and discover the varying impact of the imaginal on philosophical speculation in Antiquity, and this from the start of speculative writing (as in the Pyramid Texts of Unas) until the end of Late Hellenism. Exceptions, such as the vast scholarly corpus of Aristotle and the Enneads of Plotinus, are indeed rare, for even Augustine was tempted to exchange a rather academic & argumentative style for a more literary one (as in his Confessions). Of course, authors (like Plato and Boethius) may choose literary devices like dialogues to convey proper arguments. Philosophy was not yet divorced from the various other topics of high education, as the division of learning in "trivium" & "quadrivium" demonstrates. Indeed, "philosophia" was envisioned as uniting all branches of knowledge, nourishing the Seven Liberal Arts, the "curriculum" of study in both Classical and Medieval times. With the Summa Theologica of Thomas Aquinas, the authority invoked by the Peripatetic tradition culminated. This opened the gates for a flood of genuinely boring, but highly significant, philosophical works in an object-dependent style (Abelard, Duns Scotus, William of Ockham, Cusanus). In many ways, the works of Descartes, Locke, Berkeley, Leibniz, Hume & Kant are part of this mentality. ∫ Each time we overestimate the potential of something, we are bound to discover weakness and frailty. Each time we reduce grandeur, we invoke surprise. When both Heaven and Earth are considered beforehand, what can go wrong ? The answer to any query comes along as soon as we are ready with the question. β. An imaginal style is literary, i.e. creative writing of recognized artistic value. It does not try to eliminate connotation to promote denotation. Syntax never supersedes semantics. It may even invite and manipulate ambiguity to indulge in semantic wealth, not avoiding redundancy. The works of Nietzsche are perhaps the best example history has to offer, but Kierkegaard & Heidegger should also be noted. Of course, these are wholesale works of literature, not aphoristic counterpoints. ∫ Object-dependent style depersonalizes. In doing so it objectifies what remains embedded in the subjective. Imagination personalizes. In this way it subjectifies what cannot do without objectivity. The far extreme of the subjective becomes objective. Too much objectivity betrays a subjective intent. Both are not contradictions but complements. γ.
Practically speaking, the distinction between an object-dependent style and an imaginal style is not clear-cut. Writers such as Fichte, Schelling, Hegel, but also Schopenhauer, Bergson and many others offer a mix. But examples of a strict object-dependent intent do exist. Consider Spinoza's Ethics, Kant's Critique of Pure Reason, Marx's Capital, Wittgenstein's Tractatus Logico-Philosophicus, Sartre's Being and Nothingness, Popper's The Logic of Scientific Discovery, Habermas' Knowledge and Human Interests etc. ∫ Cucumber soup is made out of a cylindrical green fruit related to melons with thin green rind and white flesh eaten as a vegetable. Firstly, if the soup were only that, it would not be soup. Secondly, who, eating cucumber soup, cares about the cucumber if not for its taste ? δ. A neurophilosophical definition (cf. Neurophilosophical Inquiries, 2003/2009) of the imaginal style focuses on the way the neocortex processes information projected on it by the thalamus. Left Hemisphere : linguistic, propositional, discrete, analytical, verbal, digital, specific features, deliberate, denotative, literal. Right Hemisphere : kinesthetic, visual, diffuse, synthetic, visuospatial, analogical, broad features, totalising, connotative, metaphorical. δ.1 Only recently has the importance of this division been understood. The neocortex or "human brain", a folded sheet of ca. 0.25 m² with ca. 20 billion neurons, is divided into two hemispheres connected by the "corpus callosum", an axonal bridge continuous with cortical white matter, consisting of ca. 200 million nerve fibers. The right hemisphere is typically non-language subdominant, whereas the left, containing the speech-areas of Broca and Wernicke, is deemed dominant. δ.2 To define the typical left hemisphere as "dominant" because it processes language reveals a prejudice mainly at work in the West. The right hemisphere may indeed be deemed "dominant" over the left in terms of the analysis of geometric & visual space, the perception of depth, distance, direction, shape, orientation, position, perspective & figure-ground, the detection of complex & hidden figures, visual closure, Gestalt-formation, synthesis of the total stimulus configuration from incomplete data, route finding & maze learning, localizing spatial targets, drawing & copying complex figures & constructional tasks. ε. Although in disciplines like logic, epistemology, ethics and aesthetics, the use of imagination is not wanted (cf. Criticosynthesis, 2008), in the context of metaphysics, the advantages of an imaginal style outweigh the precision necessary in the realm of the normative. The totalising intent, aiming at broad features synthesising the general characteristics of all possible phenomena, does call for a more diffuse band. As those parts of the spectrum invisible to the naked eye are also presented, the connotative associations of the semantic field cannot be missed. Hence, to further meaning, metaphor and analogy are indispensable. ∫ Metaphysics is a marriage and in every marriage compromise is at work. If a compromise would only have clear-cut terms, it would not last and nobody would stay married. Of course, without trust, no grey areas can abide ... ζ. Just as Heidegger before him, Derrida understands metaphysics as a philosophy of presence, a logocentrism placing the spoken word at the center. Writing is then a kind of conservation or fixation after words have been spoken. The audience is absent, while in spoken language the sign immediately vanishes to the advantage of the speaker.
With his metaphors, Heidegger did not move outside the "clôture" of the metaphysical tradition starting with Plato. His words still try to capture the nature of phenomena in a discourse pretending to be a fixation of what Heidegger "said about things". ζ.1. The conservation of the spoken meaning by written words is deceptive. Logocentrism is a mummification leaving out important elements. Trying to fixate the "heart" of the matter, other vital organs of the actual communication are removed. The spoken word is deemed primordial, and the written word derivative. In all cases, this derivation is a bleak representation of the original intent. So logocentrism fails to deliver. The spoken word is therefore stronger, but also transient. ∫ The spoken word is like eating the soup, it has tone and taste. But the activity is ephemeral. The written word is like reading the recipe, it is dry and tasteless. But it may help to make the soup again. ζ.2. So to tackle the pretence of presence advanced by logocentrism, a thinking of absence is called in. This by considering how one cannot, compared with the spoken word, recuperate the autonomy or exteriority of the written word. Consider these two French words : "différence" and "différance". The first, written correctly, means "difference", while the second, written incorrectly with an "a", sounds, when spoken, exactly the same as the first, but in fact, does not exist and so means nothing ! So the difference between them is only revealed by the text, not by the spoken word. The spoken word is protected from these letter-based manipulations. The text has its own "power" of misrepresentation, i.e. advances meanings not available in the spoken words. Grammatology wants to address this issue, and deliver the tools to identify the false exits given in the text. ζ.3. Metaphysical texts, in whatever style, are deceptive. But one cannot define their illusions from without, as it were observing them from an Archimedean vantage point. Nietzsche tried to do this by first identifying metaphysics as Platonism and then developing an alternative. But by identifying metaphysics as logocentrism, it becomes clear the battle with the illusion of presence in metaphysical texts has to happen in these texts themselves, not from a safe, matinal outside perspective, for such a proposed safe haven is itself logocentric. In other words, it does not exist. ζ.4. Metaphysical systems tend to invoke words transcending the possibilities of conceptual thought. These transgressions are posited as "exits", while they are false doors. These doors exceed the limitations of the system and/or the borders of conceptuality, and these excesses are vain. Next to every text, a "margin" has to be drawn. In this cleared space, the false doors or "transcendent signifiers" are (a) marked by adding an "asterisk" (*) to them, and (b) identified as deceptive ways to provide the system with illusionary openings allowing it to move out of itself and ground its text in something beyond the text, and this while there is only text. In the present critical transcendent metaphysics, the word "God" is replaced by "God*", thus indicating "God" has been deconstructed. In this way, no new term needs to be invented (leading to a mere cosmetic manipulation). The drawback is this : the deconstruction remains somewhat dependent on what is deconstructed. ∫ At some point, after tiresome journeys, every enduring traveller returns home. Then the road can be trodden again at a lighter pace.
Eventually, one no longer steps on, but one flies. Then the activity of travelling itself is walked through. No longer moving, all things come to the traveller. η. It is crucial to criticize the way transcendent metaphysics seeks to ground any speculative endeavour in a reified ground outside the system of metaphysics. Distinguishing between immanent & transcendent identifies the major false door of metaphysics, namely introducing non-conceptuality by way of concepts (like "intellectual perception*" or "intuitive knowledge*"). But immanent metaphysics itself is not without logocentrism, i.e. the vain conviction object-dependent writing is able to be a philosophy of presence exceeding the fluidity of the spoken word. Among many other things, like metaphorical elucidation of denotations, an imaginal style will therefore also try to correct this pretence of the text by pointing to the vain constructs of denotation, promoting the autarchy of the text at the expense of the direct but ephemeral experience of the spoken word and introducing void words arising only as a result of logocentric manipulations of letters. ∫ Systems want to protect themselves from their own collapse. But they are not like houses firmly erected on solid ground, but like trees with their roots up in the sky. Seeking where we fail, we become truly strong. Trying to avoid being hurt, one invites putrid wounds. θ. The two proposed styles complement each other. But neither of them holds the promise to eliminate the false doors exceeding the system and put down by the text fixating speculative activity. Insofar as this activity is oral, it cannot deceive in this way. Oral traditions have existed in the past and so one cannot reject this a priori. Maybe this is indeed the best way to preserve an authentic metaphysical intent. But in a literary culture, an imaginal style introduces metaphor to elucidate denotations but also (and foremost) tries to identify the presence suggested by the latter as a fata morgana. In the immanent approach, this happens by identifying the meaningless "letters" introduced by the text. Insofar as metaphysics as a whole is concerned, this takes place as a process of identifying the false exits leading to a positive, kataphatic transcendent metaphysics. Such a guard only allows for a non-affirmative negation, a "via negativa" leading to an apophatic view on the transcendent, one underlining the ineffable or un-saying nature of what lies beyond the realm of possible conceptual thought. If anything positive can be said about this beyond, then clearly such letters are, at best, sublime poetry. ∫ The method is not there to avoid problems, but to identify them. Problems are not identified to solve them, but to avoid them. Avoiding problems does not take them out, but gives us the material of humour. Being able to laugh with depth and extent feeds the intellect. Science and metaphysics are not serious things. Nor are they ridiculous. They preoccupy the humble mind dreaming grand stories. We cannot avoid ourselves. Complementing an object-dependent style with an imaginal style serves the purpose of destroying the illusion strictly defined words are able to mimic the procedures of science. Although process metaphysics needs to be logically correct, avoiding contradictions, promoting completeness and attending parsimony, it does so for the purpose of binding words in a way discrete, serial & analytical communication is made possible.
Constantly confronting and exchanging this analysis with the imaginal builds a higher-order semantic metalevel needed to convey totality and parallel communication fostering synthesis. But these stylistic protocols do not take away the deeper problem of logocentrism, the fact words only appear to convey the spoken word, the living and rich reality of direct human communication. In fact, as both styles make use of symbols, they betray truth by allowing false doors to suggest exits to an absolute representation. By showing where these false exits occur, the reader may draw a margin next to the text. The latter is not criticized by trying to remove these false doors, for this is vain. However, in this margin, the metaphysician explains how they "open" and "close" the text to something deemed "outside" it. Moreover, transcendent signifiers at work in the text are identified by adding an asterisk (*) next to the keyhole. These "procedures" are not invoked to "clear" the text from the problem of logocentrism, for this cannot be avoided. But by entering the lion's den and counting his teeth while he roars, we are better equipped to know how we indeed may be ripped apart by grand & majestic words. In a metaphysical system, in particular a metaphysics of process, the crucial critical demarcation lies between speculative activity staying within the confines of conceptuality (in all its modes, i.e. proto-rational, empirico-formal, transcendental & creative) and cognitive activity exceeding these confines (as in non-conceptual, nondual cognition). Transcendent metaphysics is radically distinguished from immanent metaphysics, and this happens within the domain of metaphysics itself. § 4 Creative Unfoldment. α. Historical perspectivism, developed by Nietzsche, promotes the view all ideations (both sensate and mental) take place from particular perspectives. The world is accessed through perception, sensation & reason, and this direct & indirect experience is possible only through one's individual perspective and interpretation. A perspective-free or an interpretation-free objectivity is rejected. Hence, many possible conceptual schemes, or perspectives, determine the judgment of truth or value and no way of seeing the world can be taken as absolutely "true". At the same time, it does not necessarily propose the validity of all perspectives. ∫ This inflation of the subject at the expense of the object leads to less subjective fulfilment & happiness. The more we are preoccupied with our own perspective, the less pliant the mind becomes. The less pliant the mind, the more dissatisfaction with conventional reality. β. For historical perspectivism, rejecting objectivity, there are no objective evaluations transcending cultural formations or subjective designations. Experience, always originating in the apprehension of sensate or mental objects, is always particular. There can be no objective facts covering absolute reality, no knowledge of the ultimate nature of phenomena, no logical, scientific, ethical or aesthetic absolutes. The constant reassessment of rules in accord with the circumstances of individual perspectives is all that is left over. What we call "truth" is formalized as a whole shaped by integrating different vantage points. This is a conventional truth, a transient intersubjective consensus. From which perspective did historical perspectivism arise ?
If all experiences merely depend on individual perspectives, then perspectivism, as a view encompassing all perspectives, escapes the proposed relativity. As self-defeating as radical relativism, historical perspectivism is an exaggeration, an extreme unwarranted by the normative disciplines of transcendental logic, epistemology, ethics & aesthetics, discovering the principles, norms & maxims we must accept to be able to conceptualize cognition, truth, goodness and beauty (cf. Criticosynthesis, 2008, chapters 2, 3 & 5). By connecting factual uncertainty with normative philosophy, rejecting a set of principles, norms & maxims a priori, a major category mistake is made. While facts validating empirico-formal propositions of science are indeed Janus-faced, simultaneously showing theory-dependent & theory-independent facets, the transcendental meta-logic of thought, valid knowledge, good action and sublime art are universal, necessary and a priori. This is not the result of any description (of logic, epistemology, ethics or aesthetics), but merely the outcome of what is necessary to be able to think the possibility of these crucial domains of human intellectual effort. ∫ In all cases, we stay dependent on what is rejected. Either both terms of the equation are eliminated or both are allowed. Perspectivism is correct in identifying subjective vistas, but -in an inflated mode- cannot sustain its own intent without relying on some object. In the absurd extreme, this object is the absoluteness of perspectivism itself. This is merely a contradictio in actu exercito. γ. While conventional truth can only be known in the context of subjective and intersubjective experiences, critical perspectivism challenges the claim there is no absolute truth. Firstly, within the domain of conventional knowledge, a transcendental set of conditions & rules of thought, cognition, conceptuality, truth, goodness and beauty pertains. These form the normative disciplines studied by normative philosophy. These conditions & rules are found or unearthed by reflecting on the conditions of these objects. What is thought ? What is a cognitive act ? What is a concept ? How to validate knowledge ? How to produce valid knowledge ? How to act for the good ? How to fashion beauty ? Secondly, valid knowledge can only be identified if absolute truth regulates this truth-seeking cognitive act in terms of correspondence & consensus, the two ideas regulating reality (experiment) & ideality (intersubjective argumentation) respectively. Moreover, it may be conjectured, the possibility of a direct experience of absolute reality depends on the extent to which individual perspectives are eliminated. As the concept always involves such a perspective, only conceptual thought is barred from this. Intuitive, nondual cognition is not rejected beforehand. It is non-conceptual and can be prepared by "purifying" the conceptual mind, i.e. thoroughly ending its addiction to the substantial instantiation (of object and/or subject of knowledge). ∫ Normative statements are true in a meta-conventional sense not escaping conventionalism. Valid empirico-formal statements are true in a conventional sense. Absolute truth, the emptiness of all phenomena, can be conceptually approached by way of ultimate analysis. The direct experience of this truth is possible but ineffable. Although object of un-saying, this nondual experience has nevertheless a direct impact on what is done, said and thought. It therefore modifies our experience of the conventional world.
Hence, it is not trivial or insignificant, quite on the contrary ! δ. Critical perspectivism accepts the theory-ladenness of observation, and so cherishes the critical distinction between perception & sensation (Criticosynthesis, 2008, chapter 4). Three fundamental perspectives are given clear borders, marked as "for me", "for us" and "as such". The first person perspective belongs to the intimacy of the observer. Nobody shares two identical reference-points. Position & momentum are unique for every point. So is the available information one has, as well as the clarity of one's conscious apprehensions (sentience). The third person perspective is the paradigmatic, shared, transient, conventional, intersubjective view of a community of sign-interpreters. It is valid (working), but mistaken. While efficient, it does misrepresent objects. Viewing them as independent and existing from their own side, it conceals their true, absolute nature or emptiness. δ.1 This absolute truth is not some super-object grounding or underlying objects. It is the ultimate nature of each and every conventional object. Therefore one can only epistemically isolate emptiness, for in every concrete event, the absence of inherent substance is simultaneous (or united) with the interconnected & interdependent nature of all the elements constituting this actual event. δ.2 The ongoing unity of emptiness (absence of essence) and interdependence is called "full-emptiness". ∫ In the measure a second person perspective opens up, fructifies and shares two first person perspectives, it extols the truth, goodness & beauty of personal love. Extremely rare, this love is often replaced by an act of mutual masturbation. When the cuddling is over, the other person is dropped like an empty can to be filled and consumed again and again. ε. An idiom is the style of a particular writer, school or movement. Let critical perspectivism be the adopted idiom of this process metaphysics, encompassing and integrating the rather "technical" methods of object-dependent and imaginal writing. To succeed, the following distinctions and devices are introduced : ε.1 Uttering "grand stories" is finished. This reveals the awareness no independent substance can be identified. Neither sensate nor mental objects provide us with an inherent own-nature, an essence independent from other objects, self-powered & autarchic. Process-based, phenomena cannot be grounded in a sufficient ground outside conceptual thought. Hence, the fake grandeur of previous ontological schemes is their pretence to conceptually represent the absolute nature of what is, the suchness of all possible phenomena. ε.2 Accepting perspectives, we divide sensate and mental objects, and grasp the events happening on the sensitive areas of our senses as not identical with the thalamic projection on the neocortex. Although sensate objects have a perceptive base, each apprehended object is the product of perception and interpretation (or perspective). Facts are hybrids. On the one hand, they are theory-independent and, so we must think, correspond with absolute reality. On the other hand, they are theory-dependent, arising within the perspectives or theoretical connotations of an inter-subjective community of sign-interpreters. Because conceptual knowledge is validated by way of test & argument only, one cannot eliminate these signs (in the form of ideas, notions, opinions, hypotheses or theories) without invalidating epistemology.
But accepting the theory-ladenness of observation does not eliminate the fact that facts are always about something extra-mental. While keeping immanent metaphysics distant from transcendent speculations, an absolute perspective is not rejected. Against Plato, this is not a "substance of substances", but a property of every actual object. While impossible to cognize conceptually, this absolute nature of all phenomena is not a priori deemed outside the realm of the cognitive. This corrects classical criticism. Absolute truth can be part of a non-conceptual cognitive act. Here we take a step further than Kant. The two styles, providing stylistic dynamism to the idiom, bring in the variations necessary to keep the text open and unfolding. They do not interpenetrate, but form a counterpoint running through the text. To allow the reader to identify false doors, meaningless letters or collections of letters, the distinction between world-bound and world-transcending speculation is maintained throughout. Moreover, immanent metaphysics itself is scrutinized, dividing limit-concepts from actual infinities, regulation from constitution and architect from creator. ∫ Mistrusting the written word while composing a story or a system, accepting subjective bias from the first inklings of conceptual thought and keeping the efficient nature of conventionality intact, invites the reader to find his or her own path to absolute truth. This retains the Socratic intent. ζ. Creative unfoldment gives way to unforeseen momentary interactions born out of ambiguity, redundancy and free associations running parallel with the object-dependent channel. Because of this structure, it does not involve automatic writing, but does make use of a surrealist psychic mechanism, a "waiting" birthing unexpected encounters bearing novelty. Metaphysics is therefore also a work of art. ∫ Waiting is the awareness of the conventional reality we find ourselves in hand in hand with the intervention of the most unlimited freedom ready to deeply move us and bring about novelty. Freedom is this total openness to what is possible, a negation and denial of what is thought impossible. Our limitations are to a very large extent self-imposed. η. Critical perspectivism is the idiom of this metaphysics of process. It brings into view three fundamental perspectives : the immediate, the mediate and the absolute. The immediate context is what is given hic et nunc. Foremost a first person perspective, it directly demonstrates to us the singularity of the act of cognition. In conceptual thought, the concept, by symbolizing object/subject relationships, mediates between the knower and the known. This always involves an interpretation, a unique perspective. The mediate context has intersubjective concepts validated by consensus. When valid, this conventional knowledge works but is deceptive. While actually other-powered, objects are apprehended as self-powered, possessing a nature or essence of their own, separate & independent from other objects, while this can never be found to be the case. While it is true sensate objects are imputed on a perceptive base, they never appear without a large set of mental objects. The absolute perspective, ultimate nature of phenomena or absolute truth of the absolute Real-Ideal cannot be apprehended, but only conceptually approached by using a non-affirming negation. Not sheer nothingness nor a void, it is never some thing separate from actual objects.
Hence, to frame its totalizing view on the world, immanent metaphysics must never use actual infinities, but only limit-concepts. This perspectivistic idiom tries to bring into balance the counterpoint of object-dependent & imaginal styles. A few important themes stand out : a consistent sensitivity to integrating objective & subjective perspectives in all areas of speculative interest ; maintaining the difference between a regulative and a constitutive use of concepts ; a radical division between immanent & transcendent speculative activities and finally, providing speculative arguments backing the idea of a "Grand Architect of the Universe", a Corpus, Anima & Spiritus Mundi, or supermind, rather than arguing in favour of the arising of the world from the activity of an omnipotent "Creator God", a "King of Kings" able to will all of this "ex nihilo". Why not ? This "substance of substances" cannot be found ! § 5 The Style of Process Metaphysics. α. Natural languages reflect the objectifying convictions of their users. Nouns and the adjectives qualifying them refer to objects existing apart from other objects. Verbs and the adverbs qualifying them refer to actions between these independent, self-contained, self-powered, separate entities. β. Awareness of full-emptiness, embracing the process-nature of all possible objects and their interdependence, understands nouns as momentary labels placed on the ongoing stream of actual occasions. These moments do not exist on their own, as it were constituting the stream, but are interconnected with all other moments of the stream. The unit of the stream is therefore the differential moment (dt), i.e. an infinitesimal interval, an instance, droplet or isthmus of actuality. The differential moment has architecture, a capacity to shape novelty in what, without this, would only be an efficient transmission of the probabilities of momentum & position (unqualified by architecture and sentience). γ. Seeking a language of process is not like wanting to find a new kind of speech. Nor is it a meta-language counterpointing natural languages. Attending speech and being attentive to conceptual anchors leading to reification and enduring (eternalizing) architectures does not call for a special verbal or written discipline. It merely accompanies the intent of every speech-act. In texts therefore, a recurrent undermining of essentialism is at hand. ∫ In seeking to meet the king, process philosophers only experience his kingdom. They never meet him face to face. Relinquishing the seeking itself is the end of philosophy and the beginning of mysticism. δ. The "I-am-telling-You"-approach of historical process metaphysics invites the reader to develop his or her own arguments. The basics are given, but the unfoldment of the text in the minds of the readers is left open. More than a passive recorder of what is meant, the audience is a co-creator of and a contributor to the creative unfoldment of the text. Hence, mere words exceed the text and bring about outspoken reactions. This coalescence may turn it into a cultural object : a tissue of interconnected seeds and their recurrent fruition. The main linguistic problem the text of this metaphysics of process encounters is the noun- and verb-structure of language. A noun tends to represent a fixed continuum, unchanging relative to the adjectives. In traditional formal logic, the proposition is divided into subject & predicates, into substance & accidents. The former is stable, the latter prone to change.
However, any label captures a moving, ever-changing phenomenon, or set of actual occasions. The object signified is not as "fixed" as the symbol signifying it. Language betrays substance-thinking. Not only is there a logocentric misrepresentation, but on top of that not a single word is adequate enough to convey process. Unfortunately, we have to row with what we have. Artificial languages may solve many problems, except that they remain unintelligible to the large majority of human beings. The singular, momentary actual occasion x has differential extension. Every possible property, attribute or aspect characterizing it represents a process, not a substance or ¬ x. Thus, x is to be written as xΔ, with Δ representing, for all possible properties Σp of this instance x of the set of all actual occasions, the totality of its differential extensions. If time is the only property of x, then xΔdt prevails. Like the water of a river, the bases of perception and mental constructs constantly change. The labels catching these translate them into components of our natural languages. At best, namely as valid empirico-formal knowledge, they truly represent, for the time being, the dynamical features of the water as determined by the morphology of the riverbed, the volume of the water, its momentum, and obstacles in the river, etc. But these conventional truths are mistaken representations. Objects appear as separate and independent, while in truth they are interconnected and interdependent. There is no "water", but merely a label imputed on a perceptive base turned into a sensation. The vastness of this network makes it impossible to represent this in any known language. Even our most sophisticated words fail us dearly. And if we use artificial languages, the issue becomes elitist, like understanding the logic & mathematics of the Schrödinger equation. Process metaphysics wants to understand the stream. It catches the swimmer in the act of swimming. Studying & reflecting, it tries to find out the style of the movement, the features of the ongoing dynamism or kinetography defining the architecture of this movement ... Process philosophy is therefore a kind of kinetography. And movement is more than just moving, sound is more than mere noise. What is added is a certain awesome dynamical symmetry.
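The Δ-notation introduced above admits a tentative formal rendering. What follows is an interpretive sketch, not the author's own calculus : the set written \mathcal{A} of all actual occasions is an assumed name, and reading each property p in Σp as contributing a differential dp is only one plausible construal of "differential extension".

\[
\begin{aligned}
& x \in \mathcal{A}, \qquad \mathcal{A} = \text{the set of all actual occasions},\\
& \Sigma_p(x) = \{\, p_1, p_2, \ldots \,\} \qquad \text{(all properties, attributes or aspects of } x \text{)},\\
& \Delta(x) = \{\, dp_i \mid p_i \in \Sigma_p(x) \,\} \qquad \text{(each property read as a process, a differential extension)},\\
& x_{\Delta} := \bigl( x, \Delta(x) \bigr), \qquad \text{and if } \Sigma_p(x) = \{\, t \,\}, \text{ then } x_{\Delta} = x_{\Delta dt}, \text{ a bare infinitesimal interval } dt .
\end{aligned}
\]

On this reading, the subscript Δ marks the occasion as a bundle of differentials rather than a substance carrying accidents, which is the anti-essentialist point made above against the subject & predicate form of traditional logic.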
B. Opposition, Reduction & Discordant Truce. To apprehend in a comprehensive way how all things hang together, forming a Gestalt or mandala of possibilities and their relationships, and to try to affirm this in a coherent way, accommodating a reasonable view of the world, seeing it as a whole, satisfies the metaphysical instinct. But to generate such an articulate worldview is not without methodological problems. The most basic of these is not the coordination of all possible domains of knowledge necessary to make this integration happen (leading to a compromise between attention for parts and for the whole), but the choice of axioms, i.e. propositions not susceptible of proof or disproof, but assumed to be self-evident and so above all suspicion. Besides its Axiomatic Base, a metaphysical project, in every case Herculean, may choose one of the following methods : 1. comparative : first a series of basic concepts like "being", "life", "time", "consciousness", "group", "energy", etc. are chosen and, to arrive at a global view, the history of these compared. One replaces the mandala of one single domain of knowledge with the study of a single foundational concept of that domain. This approach, found in academic courses on metaphysics, is necessary but rather atomistic and so merely a preparation for more serious work ; 2. subjective : here, a single person gives way, possibly in an imaginal style, to what he or she knows, believes and/or feels, bringing a small area to a very high level of articulate consciousness. Although highly subjective, this will -given this person's information is not too restricted- serve to prepare a deeper and more extended view ; 3. synthetic : finally, one tries to erect a worldview using all relevant information available within a given time frame. Historical examples of this method are the corpora of Aristotle & Bacon. At present, the interval would obviously extend between the Age of Enlightenment and postmodernism. Such synthetic activity depends on the number of knowledge domains integrated, as well as on the validity of the assembled information. These synthetic efforts are never "finished", but merely represent the best possible global picture available. They need to be corrected and completed by succeeding generations. Grasping how both an extensive treatment of details and a comprehensive global construction will not eliminate all possible lack of clarity, one realizes a complete synthesis will not be arrived at. Some terms may remain foggy or incoherent. Of course, a sincere author tries to do away with these "inadequacies" as much as possible ... Nevertheless, the brontosauric aims of both analytical philosophy (focusing on details), as put into evidence in the Principia Mathematica, and grand speculative stories like Sein und Zeit are bracketed. Indeed, these efforts remained incomplete ... But, in a world knowing Gödel, is completeness wanted ? Given the global dimensions of criticism today, the construction of such a synthetic metaphysical worldview is not a "modern" endeavour restricted to Western culture (as it obviously was in the past), but is necessarily multi-cultural and so hypermodern, incorporating the best of both Western & Eastern views. Because it no longer lingers to merely deconstruct modernism, relinquishes radical relativism and tries to erect an "open" grand story, it also supersedes postmodernism. The latter remained too destructive and sceptical and so basically infertile, barren. Indeed, scepticism and dogmatism are to be avoided. Only criticism, the articulation of clear distinctions, truly advances knowledge. As will become clear, radical postmodernism was also unable to reach its goal : to eliminate metaphysics ! Hail to the foremost spirit of the Western Renaissance and the highest honorary salute to the Masters of Wisdom of the East ! Let us point to six sources aiding the construction of a contemporary synthetic worldview embracing a critical metaphysics : 1. science : valid empirico-formal propositions point to facts all possible concerned sign-interpreters for the moment accept as true. They form the current paradigm, featuring a tenacious, regular knowledge-core, a co-relative field containing all domains of scientific knowledge and at its fringe a periphery touching semi-science, proto-science & metaphysics. At hand is the production of provisional, probable & coherent empirico-formal, scientific knowledge held to be true. The core sources of knowledge are experimentation & argumentation (cf. Criticosynthesis, 2008, chapter 2) ; 2. ethics : if science aims at knowledge and truth, ethics is primarily concerned with volition (the source of action) and the good.
Here we articulate judgments pertaining to the good (the just, fair & right), providing maxims for what must be done. The core sources of this good action we seek are objectively duty & calling and subjectively intent & conscience (cf. Criticosynthesis, 2008, chapter 3). Accommodating valid conventional knowledge or science, metaphysics is aware of the normative principles, norms & maxims of ethics. The reason is clear : as soon as anthropological issues arise, one cannot speculate without considering the rules covering good action ;

3. politics : ethical concerns lead to views on the organization of just, fair and right societies. Worldwide democracy is gaining ground, for the right of individuals to decide what happens to them in society is a logical extension of critical ethics. Because tyranny & dictatorships, whether religious, nationalistic, elective or otherwise, contradict the normative rules of ethics, they must eventually crumble. No metaphysics can be unaware of this. The core source of a good society is the educated choice of its peoples. Of course, democracy can be organized in many ways. In the West, a strong opposition is deemed necessary to fuel debate and to guarantee a variety of opinions circulate. This is a Greek streak. In the East, a common goal for the betterment of the majority is deemed more important than opposition, debate and regulated conflict often infringing respect (despite Lao-tze & Chuang-tze, the East favours Confucianism). Clearly, speculating on the actual meaning of human life cannot be done without incorporating politics ;

4. economy : ethics & politics need a system to organize the scarcity of material goods & services in a good way. Solving the energy-problem is the source of an adequate solution satisfying the needs of all sentient beings. Only green energy is a viable solution, for humanity is no longer allowed to plunder Nature without severe & very costly retributions. Technology links economy and science. Bridled by ethics and democracy, these then lead to an efficient & ecological (sustainable) economy. Speculating on how the interaction between science, ethics & politics can be used to satisfy needs by way of goods & services calls for economy and its laws ;

5. art : judgments pertaining to what we hope others may imitate, namely the beauty of excellent & exemplary states of matter, are objectively based on sensate & evocative aesthetic features and subjectively depend on one's aesthetic attitude (cf. Criticosynthesis, 2008, chapter 5). Its source is feeling and its aim the beautiful. A good, global democracy organizing an efficient economy, taking advantage of valid science is therefore not enough. Human beings seek to express their feelings in ways others like or dislike to imitate. A metaphysics has to incorporate the beautiful in terms of harmony, unity, symmetry & asymmetry. Not only because human beings love beauty, but also because (a) Nature is basically an architecture of symmetry and symmetry-breaks and (b) a hypermodern understanding of the Divine integrates concepts like harmony, unity and probabilities leading to these ;

6. religion : insofar as the Divine (cf. Criticosynthesis, 2008, chapter 7) is part of our metaphysical inquiries about the world, it cannot be more than a "spiritus mundi" remaining, as the Stoic "pneuma", within the order of the world, never transcending worldly possibilities. Then, the Divine does not transcend the world, but merely defines its outer limit.
Not explaining Nature from without, it helps to understand its conservation & design, leading to the concept of the "Architect of the World". To connect the order of the world with the idea of some thing outside the world, to not exclusively define immanence by way of limit-concepts but indeed envisage actual infinities, is to move our religious attitude outside Nature, beyond the world. Logic teaches such a transcendent signifier cannot be conceptualized. But can it be cognized ? The possibility of a "cognitio Dei experimentalis" has to be envisaged, but can never be "proven". Such mystical experience is ineffable, object of un-saying. Of course, an immanent conceptualization of the Divine is a powerful source of inspiration for metaphysics. Besides being the object of a personal experience, it can be backed by arguments (like the argument of conservation, the argument of design and the wager-argument). Transcendent metaphysics can be sublime poetry and sublime poetry may influence the conceptual mind.

These six aiding sources are used to develop an (immanent) metaphysics of process calling for (a) a comprehensive, totalizing metaphysical worldview incorporating both natural and social realities, and this in tune with (b) a logical study of language and science, making room for (c) the expression of direct experience and nondual, non-conceptual cognition. Of course, it will be impossible to cover all possible speculative objects. Not only because all known objects form a very vast body of knowledge, impossible for a single mind to synthesize fully & completely, but also because new objects are not to be excluded. A priori these cannot be covered. Also, it is inevitable some areas will receive more attention than others. Indeed, the metaphysics discussed in the present text will focus on being, cosmogenesis, biogenesis, sentience, anthropogenesis & the question of the Divine. It will not cover economy & politics. In general metaphysics, the idealized totality presents itself as an organic unity & pluralistic integration of process. An ontological scheme is developed & argued. In its application, as in specific metaphysics, phenomena relevant to the details of the totalized view are integrated.

§ 1 The Axiomatic Base.

α. The five postulates advanced by Russell in his Human Knowledge can be summarized as follows : (1) the world is composed of more or less permanent things. A "thing" is a part staying invariant under certain operations and constant during a certain time with respect to certain properties ; (2) causes and effects of events remain restricted to a certain part of the previous or succeeding total state ; (3) causality diffuses continuously (with contiguous links), so there is no actio-in-distans ; (4) if structurally similar complex events are ordered in the vicinity of a central event acting as a center, then they belong to the causal series pertaining to that center ; (5) if A looks like B, and both were observed together, one may suppose that if A is again observed and B not, B will nevertheless happen.

The first postulate affirms things are more or less permanent. Russell was aware things change, but he refused to impute impermanence as one of the fundamental signs of existence. Permanency, invariance and constancy are given preference over impermanency, variability and change, or, more precisely, process-based creativity or novelty. Was this Russell's Platonic, Greek bias ? Process thinking does not posit permanency, but advances the cycle of arising, abiding & ceasing, i.e.
the dependent-arising ("pratîtya-samutpâda") of phenomena. The world is composed of emerging actual occurrences. These stay around for a while and then cease to exist as such, entering into the creative advance of succeeding actual occurrences and their togetherness as events, objects, entities, things ...

The second postulate, besides limiting determinations and conditions to causality, restricts the spatiotemporal influence of causality. Of course, as chaos-theory proved, small causes may have large effects (cf. the Butterfly-effect). The third postulate conflicts with quantum mechanics, for its non-locality underlines the absence of Einstein-separated events in the realm of physical reality. The fourth postulate connects structural similarities with causality, while the fifth postulate turns the psychological mechanism of habituation into a source of knowledge. This can only be realized if A and B are indeed deemed permanent. Adding "more or less" does not change this. These postulates show what happens when the Axiomatic Base is too narrow, too much concerned with identifying identities and less with grasping how "things" emerge out of the sea of ongoing process.

∫ Russell considers realism, with its adjacent notions of permanency and a direct sensuous access to objects, as the hallmark of sanity. Is this not like confirming suffering ? Only those who know they possess nothing can never lose anything. The root cause of this dissatisfaction is superimposing static concepts on fundamentally transient phenomena. This essentialist fallacy, accepting objects must have some unchanging core, makes us cling to the same thing even if nothing stays identical.

β. The First Postulate, or basic conviction, is : there is a world, a Nature, a universe, or, in other words : all possible phenomena, all what actually is, exists. This aims at maximal totality, a system encompassing all possible systems. Our Second Postulate affirms the totality of the world has a world-ground. This is the sufficient ground of the world, i.e. no deeper level can be found. This ground is however not substantial or self-sufficient. The crucial difference here lies between a self-sufficient reified ground and a process-based, non-substantial sufficient ground. The Third Postulate defines the building-blocks of all what exists in the world as actual occasions.

∫ Thinking there is some better "world" outside the world makes us hope to attain it and fear not to. But accepting the existing world is all we have, brings in the care for every moment of it.

γ. The world is the totality of all actual phenomena, the set of all concrete actual occasions, events, entities & things part of the world.

the world : concrete actual occasions, events, entities & things given by experience ;
the world-ground : sufficient ground, process-based, abstract, formative potentiality.

γ.1 As a set of formative elements, the world-ground is merely the sheer possibility of the world. The world-ground is only the possibility of the next moment of the world itself. World & world-ground define the world-system. If the ground of the world is merely the possibility of the world, then the actualities of the world are not determined by a substantial transcendent origin outside the world ; they are not otherworldly.

γ.2 There is no transcendent self-sufficient ground "outside" the world. The world-ground is a set of ontological principles concerning the primordial and the pre-existent.
In process thought, these are merely formative elements necessary to think the next moment of the actual world. They do not stand alone, neither do they act as "creative" principles bringing forth the world. They are a set of process-based roots drawn -by reversal- from the domains of actuality characterizing the world, namely matter, information and consciousness. This is the hermeneutical circularity necessary to eliminate any hint of an ontological divide between the world and its ground. Nevertheless, the world is finite & relative, the world-ground infinite & absolute.

∫ The world-ground is the servant of the world ; it does not create it.

γ.3 Just imagine an absolute substance "outside" the world, a substantial, self-sufficient world-ground indeed causing the world to come into existence "ex nihilo". Then, the world would depend on something eternal existing from its own side. As in Platonism, the world would be divided into two ontological layers : a perfect world of static eternities and an imperfect world of relative becoming. This view is firmly rejected. In actuality, there is only the world and nothing else. Indeed, as ultimate logic shows, a substance cannot be found.

the world : critical : concrete actuality made likely by the primordial sufficient ground of process ; traditional : the mere modification of the primordial own-nature of all things ;
the world-ground : critical : sufficient ground but process-based, the primordial possibility of change ; traditional : self-sufficient and thus substantial, the primordial own-nature of all things.

γ.4 The "transcendent" speculations of critical metaphysics do not have an absolute self-sufficient, self-powered substance acting as world-ground "outside" the world, but an ultimate nature which is the property of every single actual instance of this totality. The "transcendence" posited is not beyond, above, outside or next to the world. The world-ground, being merely a formative abstract, has no spatiotemporal characteristics. Traditional reified (essentialist) transcendence is not at hand. The object of this transcendent metaphysics is not an eternal, self-sufficient "entity of entities" or "substance of substances". The transcendence aimed at is not a Greek God ! If a transcendental signifier can be identified (albeit by the thorough application of the non-affirmative negation), then this ultimate reality is not a substantial, self-sufficient world-transcendent ground. Absolute reality, as the sufficient ground of every possible phenomenon, is actualized by every phenomenon.

∫ Platonic ontology betrays the deep aristocratic discontent with change, impermanence and seemingly disconnected variety. Wherever it creeps in, cherishing others is eclipsed by the rubble of the few.

the world : finite, spatiotemporal, concrete, actual, relative, conventional ;
the world-ground : infinite, non-spatiotemporal, abstract, formative, absolute, ultimate.

δ. Traditional transcendent metaphysics affirms its object to exist as a substance with inherent properties and not part of the world. But how can this onto-theology be ? If this self-powered supreme & infinite object is conceptualized, then an affirmative negation is at hand, i.e. one positing something outside, above, beyond or next to the world. Such an object must be obvious, but cannot be found, is lacking. Moreover, how can the finite grasp the infinite ? If this is denied, then nondual, non-conceptual cognition of the mind of Clear Light* does not exist. If affirmed, then how to explain the tangential moment the world and its ground touch ?
∫ Onto-theology leads to the antics of Baron von Münchhausen.

In actuality, there is a single world. There is nothing "outside" or "next to" or "beyond" or "above" this world. The topological view is rejected. Although the world has a world-ground, the latter is not a substantial reality not part of the world, but a propensity acting as the sufficient ground of the world. This sufficient ground is the absolute absence of inherent existence. This lack of substance is the primordial condition for anything to happen. Platonism is firmly rejected. This does not lead to a rejection of a deconstructed transcendent in metaphysics, but to an elimination of its traditional object : a substantial actual infinity (the God* of process is an actual infinity, but not a substance). The transcendent nature of phenomenon A is not a different object B, but a different epistemic isolate of A. The "sacred" dimension of the world is found in each and every "profane" actual occasion, event, entity or object. This by ending all substantial instantiation, completely purifying the conceptual mind. The totality of the world is all what is actually happening. The world-ground, transcending this concreteness, is not a substantial actual infinity, but a process-based formative abstract. Transcendence and immanence are not in conflict, for every object manifests a conventional nature and an absolute nature, and this without the latter being ontologically different. Only God* is (again !) the Big Exception. S/He is a process-based actual infinity ! Being actual, God* (in immanence) is not merely potential, not merely formative and therefore not merely abstract. Being also abstract, God* (in transcendence) is not a concrete actuality of the world, not an actual occasion like any other, but an absolute & infinite singularity (cf. infra).

§ 2 Monism, Dualism or Pluralism.

α. The axiomatic choice for monism is in tune with the need for unity, simplicity, elegance and comprehensiveness. The monad does not move beyond itself, but privileges a single principle. In this monarchic continuum, alteriority is not a different ontological entity, but a mere replication of the existing principle. This implies all things are interchangeable, for although ontological distinctness may be accepted, ontological differences nowhere occur.

∫ Can everything be explained by the privileged monad ? If so, then by Ockham's Razor we keep it simple. But if a single case can be found where the principle does not apply, then a fortiori monism is wrong.

β. Duality, with its powerful reflective capacities, introduces otherness as a new ontological entity. The power of duality is felt in logic and epistemology. Reflection on the structure of thought itself reveals a binary structure, erected on the principles of the transcendental logic of thought itself, namely the crucial & necessary divide between a transcendental subject and a transcendental object. The armed truce between object & subject can also be felt in epistemology, for to arrive at valid knowledge, both theory & experimentation are necessary and observation is not a passive, merely registering process.

∫ On the one hand, Descartes was correct in emphatically making the difference between the extended and the non-extended, between matter and mind. On the other hand, Cartesius was wrong to reify the difference, shaping an ontological dualism. Although both are distinct, they are not different. This crucial distinction leads back to monism.

γ.
Non-monist logics introduce more than one fundamental ontological principle (a duality, triplicity, quaternio, etc.). Ontological dualism posits two independent substances : matter versus mind. By a trinity of factors, a logical closure ensues, for by adding a third principle, a tertium comparationis, duality is no longer "locked" in singular division, no longer the nature morte of the "dead bones" of formal logic (Hegel), but indeed becomes an "unlocked", plural process capable of thinking the manifold. In many ways, triadism is well equipped to deal with manifolds and their processes. Of course, this pluralism merely multiplies the difficulties, for if it is unclear how two substances may interact, then how to explain an ontological triad or anything beyond two ontological principles ?

∫ By the multiplication of principles one does not solve the problem of unity, quite on the contrary. Unity can only be systematized by the monad. Ontological elegance, coherence (orderly relation of parts) and simplicity are born out of the monad and nothing else.

δ. To couple monism with essentialism introduces a single ontological substance. The monad is then positioned as independent and self-powered and turned into a static self-sufficient ground existing from its own side, inherently. Such an approach has difficulty explaining the multiplicity, variety, differentiation, complexity, richness & interconnectedness of the manifold. Hence, the ongoing changes & novelty happening in Nature cannot be explained.

∫ In traditional theology, the Divine was turned into an idol in the image of the Egyptian, Persian and Greco-Roman rulers. This has sterilized religious thought. The challenge at hand is to accept a universal cognizing luminosity, a mind of Clear Light*, without the dogma of an aboriginal, unmoved, inherently existing transcendence, at whose fiat the world was created and whose will it must obey to avoid punishments. To remove such paternalistic substantialism from theology is the only way forward. God* is not above, beyond or next to the world, and therefore not apart from the world, but with the world.

ε. Thinking a single dynamic principle is the solution sought. Because of the monad, all phenomena fall under the same ontological principle, leading to the absence of ontological rifts. Avoiding essentialism brings in maximal interchangeability, knitting the various textures of existence together, thus interlacing the fabric of Nature, accommodating the organic, interdependent whole it obviously is.

∫ Dynamical monism may accept the presence of a supreme dancer, a sublime movement executed with Divine grace. Such perfect symmetry transformations, the "holomovement of holomovements" of God*, continuously have all other actual occasions as reference frame. The absolute is present as an ultimate differential in every point of Nature, in every concrete actual occasion of the world.

The ontological principle of the single world-system is a single principle or monad. Monism guarantees our understanding of the world does not assume ontological differences, while thinking the monad as process-bound ends the search for a static first principle, the assumption of a single, unchanging self-subsisting essence or core. The essentialist fallacy is avoided. Although axiomatic, logically monism has definite advantages over dualism & pluralism. In the latter cases, the interaction between the separate principles, defining an ontological difference, becomes problematic.
Although the possibility of distinct actual occasions, events, entities and objects is accepted, the notion they fundamentally represent different static pockets in the ontology of the world is rejected. All compounded things are impermanent, ongoingly arising, abiding & ceasing ; this not randomly, but swimmingly.

§ 3 Critical Epistemology.

α. Before Kant, in the pre-critical era of Western philosophy, being defined (conceptual) knowing. The question of the capacity of our human cognitive apparatus was answered by referring to ontology, introducing one, two or more ontological principles first. As a result, the natural limitations of cognitive activity were either exceeded (as in dogmatism) or narrowed down (as in scepticism).

∫ The drama of conceptual cognition is exaggeration, or moving to extremes, making something more noticeable than necessary. This makes one seek a hypokeimenon, an underlying substance or ultimate thing. This illusion is then carried through. A tragi-comedy.

β. The word "criticism" derives from the Greek "kritikós" or "able to discern". In turn, this leads to "krités", or a person who offers reasoned discernment. Criticism defines borders, frontiers & waymarks.

β.1 These demarcations do not negate anything (as does scepticism), nor do they affirm (as does dogmatism), but merely posit distinctions enabling us to remove entanglements and create open spaces or clearings offering breathing-spaces between otherwise ensnared objects (cf. Criticosynthesis, 2008, chapter 2). Because of these, differences & distinctions are possible.

β.2 Hence, this "Critique of a Metaphysics of Process" intends to discern the place of a critical metaphysics based not on substance but on process, not on fixating (the eternal or the void), but on thinking constant change and therefore impermanence. It identifies the field of metaphysics by outwardly demarcating it from science and inwardly defining its main targets, to wit totality and infinity, or, in other words, the conventional wholeness and the ultimate suchness of all possible phenomena, the world and the world-ground respectively.

∫ Executing their perfected styles of movement, ultimate dancers simultaneously portray the impermanence of constant, interdependent change, as well as the permanence in the pure kinetographic style of their holomovements.

γ. Critical epistemology answers the question : how is conceptual knowledge and its advancement (production) possible ? It does not base this analysis on some previously given ontological ground. Neither reality (accessed through the senses) nor ideality (apprehended by the mind) is deemed a pre-cognitive thing triggering the possibility of knowledge. The latter is given by the groundless ground of knowledge itself, the Factum Rationis. Hence, the mode of analysis is transcendental ; its object is the structure of the cognitive apparatus, and its subject the reflective activity of the knower, bringing out the principles, norms & maxims of (valid) knowledge by merely disclosing the rules already given in every cognitive act, i.e. what is going on as soon as thought is afoot.

∫ The rational mind is not only formal, but also transcendental. Not only does it produce valid empirico-formal propositions, but also the structure of conditions (on the side of the knower) making it possible for such propositions to be produced.
Critical metaphysics differs from all previous speculative systems in its radical abandonment of substantial thinking, of grounding the mind a priori in anything except the groundlessness of the mind itself.

δ. Critical epistemology is not a descriptive activity. Why not ? There is no vantage point outside knowledge empowering us to watch knowledge as such. The possibility of knowledge is apprehended while knowing. The principles, norms and maxims are unveiled in the cognitive act itself, and this by way of reflection. These rules cannot be negated without negating the negating activity itself. Doing so always entails a contradictio in actu exercito. Hence, epistemology is a normative discipline, and its rules are those being used by all possible thinkers of all times.

∫ Valid science must be about experimentation (testing) and dialogue (with dissensus, argumentation & consensus). Valid metaphysics must argue a totalizing worldview embracing the infinite.

ε. Positing an Archimedean point outside knowledge grounding knowledge is a pre-critical strategy ontologizing the possibility of (conceptual) knowledge. This presupposes the presence of an unchanging (fixating) ground outside knowledge. Per definition such a ground cannot be knowledge at all !

ε.1 Such an incorrect view calls for a dogmatic ontology, one placing "being" before "knowing". As such, pre-critical thinking is merely an elimination of the necessary tension or concordia discors between the knower and the known, between the subject and the object of thought, either involving the affirmation of the real or of the ideal. In the former case, extra-mental reality is deemed a real self-sufficient ground for the possibility of knowledge. In the latter case, mentality itself is considered to be the underlying ideal self-sufficient ground.

ε.2 Both ontological realism and ontological idealism generate inconsistent answers to the fundamental question of epistemology and so pervert a reasonable solution to the problem of conceptual knowledge and its validation & production.

∫ Totalizing knowledge and proposing a comprehensive worldview does entail a narrow interaction between critical metaphysics and science. This to fructify speculative activity with current views in physics, cosmology, biology, anthropology, etc.

The possibility of conceptual knowledge and its validation involves critical epistemology, a normative discipline unearthing the rules of knowledge by way of a reflective, transcendental analysis staying within the borders of possible knowledge itself. To precede epistemology with ontology was the way of pre-critical thought, immunizing reality or ideality before analyzing the actual capacity of our cognitive apparatus. The capacity of conceptual thought is exceeded by the "urge for Being" found in substantialism and essentialism. Ontological realism posits a world existing independently from thought. But at no point can it impute anything without the knower. Ontological idealism affirms a "pure" mentality constituting the extra-mental. But knowledge is always about some thing. As criticism shows, neither leads to an epistemology free from the scandals of contradictions & antinomies.

§ 4 Conflictual Model.

α. Because of the inflation of (mythical & theological) metaphysics in pre-modern times, modern philosophy has invoked a radical conflict between speculative activity per se and scientific thought. This created a division between scientific knowledge and non-scientific opinions.
While the latter are accepted as valid in their own private sphere, they play no role in the domain of science. Science is a privileged language-game dealing with the objects of public life, while non-scientific opinions are merely of personal interest and so considered highly subjective & intimate.

∫ One cannot push away all possible speculative activity. Only invalid metaphysics must be abandoned, not metaphysics as such.

The tensions between organized religions and science, between faith and valid knowledge, between "alternative" (peripheral) and paradigmatic interests, etc. reflect the conflict between paradigmatic and non-paradigmatic knowledge. Two important cultural objects arise : on the one hand, an "ideal" religious faith based on "grace" (the use of speculation without science) and, on the other hand, "real" scientific facts based on experiments (or science without metaphysics). Merely talking over each other's heads, they behave as deaf men arguing.

∫ History put aside, science cannot divorce metaphysics. They are a dual-union participating in the concordia discors of conceptual thinking as such.

γ. The conflictual model, feeding an insurmountable conflict between science (the valid empirico-formal propositions forming the paradigm) and pre-critical metaphysics, inhibits speculative activity. Indeed, trying to remove the so-called infection caused by this wrong kind of metaphysics paralyzes theoretical philosophy. Resignation is the outcome. In this way, giving up the attempt to articulate a totalizing view on the world, the treasure-house of cultural objects is impoverished. Reducing the heuristic impact of speculation in this way decreases the production of knowledge. It also plunges epistemology into darkness, for the unavoidable role of metaphysical background information in testing, theorizing and arguing is poignant.

∫ The Gestalt switch invoked by the "cube" of Wittgenstein (TLP 5.5423) shows attention defines observation.

Positing a conflict between science and metaphysics, the conflictual model divides the field of knowledge into two separate domains. Accepting the presence of metaphysics, it nevertheless promotes the path of science and relegates speculative interests to one's private life. This approach is also found in the modern division between religion and science. While the former is accepted as part of human cultures, the latter is deemed the sole guardian of objectivity. This results in a depreciation of theoretical philosophy. The conflictual model is rejected.

§ 5 Reductionist Model.

α. The reductionist goes a step further and tries to entirely ban metaphysics from the arena of thought. Only science has anything to say about the world and all non-scientific entries are worthless and so to be disposed of. There are no two distinct sources of truth, but only one, namely science. Logical positivism is a good example of this approach.

∫ Radicalizing against the flow of irrationalisms, one tends to overreact and propose a silly solution emitting an air of intelligence. Irrationalism cannot be avoided, only handled properly.

β. One may also try to cancel out metaphysics by pretending to have access to an absolute knowledge, one needing no further speculation. This Hegelian approach is a super-Platonic strategy. It fails because it presupposes a Herculean conceptual capacity conflicting with a critical reflection on the possibilities of conceptual knowledge.
As will become clear when analyzing the nondual mode of cognition, this works if and only if this absolute knowledge is absolutely ineffable, thus cancelling out its direct conceptual involvement. One may also invoke the supremacy of scientific knowledge, claiming it is totally free from any dealings with metaphysics. This also fails, because both theory & experiment always presuppose metaphysical background information.

∫ Why cut the branch upon which one sits and then be sorry one falls ?

γ. The escalation from conflict to reduction increases the intensity of the attack and decreases any possibility of a constructive return.

∫ Intelligence is able to change its mind.

The elimination of metaphysics is an attempt to exceed speculation or to laud the activity of scientific methodology, based on repeatable experiments & coherent argumentation. Inflating conceptual thought leads to meta-rationality at the expense of rationality, endorsing dogmatic conceptualizations and the occultation of the factual. Such a strategy breeds fundamentalism, irrationalism and the dictates of nonsense. While a direct experience of absolute truth is possible, it cannot be conceptualized. Privileging access to the objective enthrones science, giving it an inviolate authority leading to instrumentation and fragmentation. Both are rejected. At both ends, the reductionist model fails.

§ 6 Metaphysics & Criticism.

α. A frontal attack on metaphysics, trying to remove it from thought, only manifests how metaphysics remains present in the attacker. The "intentio recta" battling metaphysics in the open field unveils it as an "intentio obliqua" surreptitiously at work in the would-be eliminator. To argue an untestable totalizing view is therefore a "vis a tergo" one cannot escape.

∫ Like the eye cannot see itself, science has a blind spot filled in by metaphysics. One tries to escape only to return. Let us accept this and move on.

β. Criticism does not try to animate a conflict with metaphysics, nor does it want to eliminate it. It accepts the abyss between science & metaphysics, but tries to bridge it. Metaphysics, the speculative integration of the totality of phenomena born out of infinity, is capable of being supported by arguments, but cannot be put to the test. The latter distinguishes it from scientific statements, both arguable and testable.

γ. Aware metaphysics is part of every possible cognitive activity, criticism merely tries to find the rules covering its use. Negatively, it criticizes metaphysics as an ontology or archaeology of the normative disciplines. Epistemology, ethics and aesthetics must not be rooted in a self-sufficient ground outside knowledge, as it were preceding it. Doing so cripples the understanding of how knowledge and its production are possible. This leads to unworkable antinomies, as Kant showed. Positively, a rehabilitation of metaphysics is at hand. As a critical metaphysics, it acts as a heuristic or teleology of science, advancing speculative notions, concepts & systems. As an "ars inveniendi" it inspires science to move beyond the periphery of its current paradigm, but without ever asking it to relinquish its two wings : experiment & argument.

δ. The distinction to be drawn then is between pre-critical and critical metaphysics. The former is a mythical & theological speculative format, invoking being to explain knowing and multiplying entities. The latter is a totalizing picture of what exists as emerging out of infinity.
This conveys awareness of the limitations of knowledge, but is nevertheless able to serve as a heuristic of science. It tries to find a single founding principle and argue the totality of phenomena (the world) made possible by the set of infinite possibilities (the world-ground).

∫ Without a single unifying principle, the unity of the manifold cannot be thought.

ε. As a philosophical discipline in its own right, critical metaphysics encompasses both totality & infinity. Pre-critical, dogmatic, foundational metaphysics, positing a self-sufficient, substantial ground before an ultimate analysis of the possibilities of cognition and the cognizer, asks us to suspend understanding to the advantage of systems of substances a priori. This attempt reifies infinity, turning it into a "substance of substances". Not so here. Advancing arguments to understand the world comprehensively, critical (immanent) metaphysics asks about being, cosmos, life and sentience.

ε.1 These answers help to clarify the fundamental questions posed by the human being : Who am I ? From where do I come ? Where am I going to ? The first question being the foundation of the foundation : without knowing myself, how to understand anything ? This "I" not only refers to a subjective sentient & luminously cognizing center of consciousness, but also to a unique objective point of observation.

ε.2 Using the realized totality as stepping stone, critical metaphysics ventures at the periphery of paradigmatic conventionality and explores infinity. First as a series of asymptotic limit-concepts of the world, next as an actual infinity, infinitely totalized as an absolute consciousness (God*). This is not an ens transcending the totality of all actual phenomena, but a series of formative abstracts with a single exception, namely God*. Discordant with ultimate logic, the Pharaonic (Platonic) intent is rejected. The absolute exists conventionally ... God* is the awareness valorizing the possibilities of the materiality & creativity of the world-ground, and the sole abstract actual occasion moving with the world. God* functions as facilitator, as a bridge between what is possible and what is concrete, touching both.

Criticism accepts the importance of both immanent & transcendent metaphysics. The former is a heuristic of science and a totalizing worldview, answering fundamental questions by way of a single ontological principle. Using a penetrating analysis, the latter is posited through a special epistemic isolate, namely the realization no inherently existing object can be found. This leads to a non-affirmative identification of suchness/thatness and conventionality. This transcendent aspect is not ontological (does not define another ontological level), but epistemological (implies a change of mind). But while absolute reality can be directly apprehended (known), this does not involve any conventional cognitive activity, and is therefore utterly non-conceptual. The realization of suchness/thatness transcends conventional conceptual reason. Meta-rationality transcends rationality without unveiling a transcendent signifier. Crucially pregnant in private life, this "seeing" of full-emptiness transforms the knower.

§ 7 Discordant Truce.

α. Transcendental logic dictates the principle of rational, conceptual thought. This may be called the concordia discors, the discordant concert or armed truce of the Factum Rationis. Duality is its architecture.

α.1 On the one hand, all possible cogitation has contents, i.e.
an apprehended object of knowledge or the known, and on the other hand, cogitation implies a thinker, a subject of knowledge or a knower. Both, of radically distinct interests, are nevertheless necessary and always joined, forming a bound, entangled, bi-polar system.

α.2 In epistemology, these two make out the simultaneity of two state-vectors : the vector of the subject of knowledge, its languages, theories and theoretical connotations and the vector of the object of knowledge, its physical apparatus, tenacity, inertia and, so must we think, factuality & actuality. A fact is the resultant vector-product.

∫ Knowledge must be about some thing extra-mental. Neither is it possible for knowledge not to be known by a knower.

β. The armed truce between subject and object of all possible thought and the groundless ground of all possible knowledge go hand in hand. Because knower and known form a pair and so cannot be reduced to one another, knowledge cannot be grounded in either objective or subjective conditions.

β.1 Suppose we reduce the subject to the object, then the latter grounds the possibility of knowledge (as in ontological realism). Suppose we reduce the object to the subject, then the latter constitutes the possibility of knowledge (as in ontological idealism).

β.2 Because we keep both sides of the transcendental spectrum at the same level, stressing their interdependence & co-relativity, knowledge can only be grounded in knowledge itself.

γ. Shocking confrontations between object and subject of knowledge are inevitable & necessary. They cannot be avoided because the tensions between knower and known are ongoing. They are necessary because without these confrontations experiments cannot be adjusted by theory and theory cannot be falsified by facts.

∫ In the research-cell, the interests of both experiment & discourse play out in the continuous process of communication between, on the one hand, everything dealing with the test apparatus and, on the other hand, all formal and informal theoretical processes (calling for opinions, conjectures, argumentations, refutations, hypotheses & theories).

δ. For more than two millennia, concept-realism was uncritically accepted. Concepts were deemed to be reliable copies of reality.

δ.1 In Platonic concept-realism, one cannot avoid asking the question : How can another world be the truth of this world ? The ontological cleavage is unacceptable. On the other side, Peripatetic thought summons a psychological critique, for how can the human soul possibly know anything if not by virtue of this remarkable active intellect able to make abstractions on the basis of a manifold of independent observations ?

δ.2 Both reductions are problematic. Because they try to escape, in vain, the Factum Rationis, and so represent two excesses denying the concordia discors of all possible conceptual thought, they form an aporia. Plato, being an idealist, lost grip on reality (positing an otherworldly substantial ideal). Aristotle, the realist, did not fully clarify the mind (positing an abstracting active intellect). Composite forms of both systems did not avoid the conflicts, although they concealed them better. The crucial tension of thought was not solved by Greek concept-realism, crippling our understanding of formal rationality. This pollution endured until Kant broke the chains we had put on ourselves ...
∫ To attribute existence to concepts, be they related to sensate objects or to mental objects, is to step outside the duality of the object-subject relationship, claiming to oversee it and decide the ground of knowledge is either objective reality (the senses) or subjective ideality (the mind). Existence only instantiates a set of features attributed to a concept, but adds nothing of its own. Eliminate the properties contained in the set, and the object imputed vanishes.

ε. When reason, understood as a stream of conceptual, discursive cognitive acts, is critically watchful and so not deluded by ontological illusions, the ideas of reason (the "Real" & the "Ideal") are not turned into ontological hypostases, but operated as regulative principles holding a hypothetical (not an apodictic) claim. In that case, conceptuality, in tune with the concordia discors, entertains a conflictual interest willingly. On the one hand, it seeks unity in the variety of natural phenomena (the multiple is reduced to a type). On the other hand, in order to guarantee the growth of knowledge, reason wants heterogeneity (the unique, not repeatable & singular).

ζ. Besides the discordant truce between the objective and the subjective conditions of all possible knowledge, another concordia discors can be identified, namely between paradigmatic science & critical metaphysics. Science is the theoretically organized system of valid empirico-formal propositions or statements of fact.

η. Paradigmatic science has a hard core, a set of statements deemed valid conventional knowledge, held by all involved sign-interpreters as true. The objects involved display a high probability of recurrence and hence the highest possible relative predictability. Around this tenaciously kept paradigmatic core, covering matters objective & intersubjective, the architecture of valid conventional science unfolds. At its periphery, we find the beginning of non-science or fringe science. Critical metaphysics proves not all non-science is nonsense.

η.1 On the one hand, science is factual and theoretical and critical metaphysics is only theoretical, and this in a speculative way. On the other hand, all sensate objects coming into consciousness through the senses are already compounded objects, and so have already been subjected to interpretation.

η.2 So no observation of fact can do without the observer and his or her mental frame or view. A critical minimum of metaphysics is needed.

θ. "Speculation" does not here mean knowledge based on neither fact nor investigation. Here, "speculation" refers to (a) a theoretical philosophy of what is beyond the physical and (b) "speculum", the Latin for "mirror", from "specere", or "to look at, to view". The last points to the totalizing, universalizing, all-encompassing, globalizing streak of a sound, valid & critical metaphysics. It involves an intelligent worldview. Although critical metaphysics is not factual, its theoretical, intellectual structures are arguable. Validation is in line with the kind of language used to convey the metaphysical view at hand. The sheer power of the combination of its chosen logic & rhetoric certainly plays a role, but not more than compass & depth.

∫ Per definition, critical metaphysics is multi-cultural and global, with a comprehensive worldview integrating as many as possible cultural objects, sensitivities and hobby-horses.
The logical conditions of thought making thinking possible convey the simultaneity of knower and known in every act of cognition, in every moment of actual knowing. Ontologies placing the knower before the known (idealisms) or those privileging the known (realisms) are pre-critical exercises in metaphysics. This needs to be identified and acknowledged. If not, ontological illusions come into play. Pre-critical thinking introduces a substance : a self-contained, self-powered, absolutely independent, isolated and autarchic essence, a thing existing inherently, from its own side only. The extremes of the set of objects belonging to substantial thinking are the hypertrophy of the subject (the knower) and the inflation of the object (the known). The former is rooted in Platonism, the latter in Peripatetics. Both have to be superseded. If not, metaphysics (in particular ontology) is an archaeology of knowledge, grounding the possibility of conceptual thought, knowledge and its advancement in something else than the mere conditions found, namely those normative principles, norms & maxims of possible cognitive thought we have been using all the time. These conditions are ontologized. This reification introduces a "real" or an "ideal" substance to ground the possibility of thought. Moreover, it brings about an illusion causing the perversity of reason.

The two sides of the logical & epistemological conditions of conceptual thought are to remain simultaneous in every act of cognition. Subjective and objective conditions remain bound together but in a constant conflict of interest. Their discordant truce allows us to understand thought, knowledge & the production of valid knowledge without scandals. Likewise, the conflict between science & metaphysics can be mediated when the interdependence between both is realized. It is impossible to dissolve this dualism. Those who try do it at their own peril and at the loss of those accepting the tenets of either ontological realism (denying all metaphysics) or ontological idealism (eliminating the role of the factual). Critical metaphysics is based on valid science, but is not a science. It is a theoretical philosophy, a totalizing speculative view of the world.

§ 8 The Objectivity of Sensate Objects.

α. The subject of knowledge, the knower, is an object-possessor. A subject without an object is as nonexistent as a square circle. So the very act of cognition calls for duality.

∫ Although duality is not unity, dual-unions do occur.

β. Two and only two kinds of objects are possessed by the knower : sensate and mental objects. Their difference is not ontological, for both are actual occasions, events or aggregates of events.

β.1 These two objects do have distinct sources. Sensate objects depend on the correct functioning of the five sensoric systems, while mental objects depend on the field of consciousness and its center, the knower.

β.2 At the bottom level of perception, sensate objects are extra-mental, but at the top level of sensation or conscious sentience these naked perceptions themselves, through neurophysiological code, interpretation & labelling, have become part of the mental world, although they remain objects with particular features derived from perception, distinct from objects imputed by the activity of the mind alone.

∫ To accept the senses, is to accept we don't sense what they perceive. To accept the mind, is to accept concepts do not perceive.

γ.
Sensate objects are those perceived by the senses, processed by the latter, transported to the thalamus and projected on the neocortex. The latter computes the identification & naming of these afferent impulses. This turns them into sensate objects part of the field of consciousness of the knower to be observed. Hence, perception and sensation differ by their measure of interpretation.

γ.1 Biologically & epistemologically, interpretation cannot be eliminated. While it can be reduced, sensate objects are always processed naked perceptive data.

γ.2 Sensation and interpretation are simultaneous. The former arises as a result of stimuli influencing the sensitive surfaces of the five senses, the latter by the ongoing activity of mental processes with their particular objects and semiotics.

δ. Objectivity is guaranteed because sensate objects depend on what happens at the sensitive surfaces of the five senses. Epistemologically, we must accept facts also carry the input of the world "out there". Suppose we don't, then our knowledge is no longer knowledge about some thing, but merely an intra-mental (intersubjective) phenomenon. The concordia discors is left for a reduction of the object of experience to the subject of experience (as in ontological idealism), leading to a corrupt form of epistemology, misrepresenting the possibilities of knowledge, as well as its production.

∫ Neither reality nor ideality is a problem. Their reification always is.

ε. Objectivity is the tenacity with which sensate objects appear solitary, independent and separated from other objects. Physical reality defined by physics implies a something which is not thought, with relations not requiring they are thought about. This homogeneous approach of Nature defines the latter as constituted by the extra-mental, by the theory-transcendent aspect of facts. In the physicalist & materialist view, sensate objects are "real" because they are independent and separate from Nature being thought about. Although objectivity is stubbornly unyielding, not a single permanent sensate object is found, for every object is fundamentally a differential moment and so in process rather than revealing ipseity, own being, own becoming, own-form, intrinsic nature or substance from its own side. Hence, objectivity is always relative to the interval at hand, and this unveils conscious choice. Also spatially, subjective expectations trigger new objective perspectives.

∫ Reality and ideality are not to be avoided, but merely act as the two regulative ideas bringing, by way of correspondence and by way of consensus respectively, the two methodological sides of the process of knowledge-production to a greater unity.

ζ. Without sensate objects, true conventional knowledge, i.e. the valid empirico-formal propositions of science, cannot be articulated or validated. They, so must we assume, provide the elements not dependent on mental objects. These are not substances, but the ongoing actuality of phenomena. But although facts appear as constituted of elements independent of the mind, they are at the same time constituted by theories depending on opinion, intersubjective testing, conjecture & argumentation, yes, even on implicit or explicit metaphysical background information. Sensate objects are therefore only seemingly stable and inherently self-identical.
Not to grasp this is to break away from the concordia discors and plunge reason into the scandal & folly of a "perversa ratio", like the one promoting, by lack of spirit, the "nature morte" of a dying universe without rebirth.

∫ When moving to the extreme of objectivity, subjectivity needs to be invoked !

η. Natural science's exclusive concern with thoughts about Nature, concepts not requiring they be thought, is not an ontological choice (as in ontological realism found in materialism & physicalism), but an epistemic interest or methodological concern. Natural science wants to isolate the "hard facts" as clearly as possible, meaning independent of the necessity of their appearance in fields of consciousness in order for them to function. The conditions & determinations of a physical object call for the calculation of the probability of some sensate object to manifest properties. The latter reflect, so we are bound to assume, the interconnectedness of Nature stimulating the sensitive surfaces of the five senses. The recurrence of the form of definiteness at hand identifies the activity of Nature insofar as it is approached homogeneously.

θ. Because all phenomena are actual occasions, natural science is able to enlarge its perspective and integrate other families of actual occasions like information and consciousness. Together with matter, these represent the hardware, software and userware to be studied by natural science.

∫ Redefining "phenomenon" as "actual occasion" breaks away from the identification of the object of natural science with matter. Code, symbols and information (form), as well as autoregulation & conscious observation (contents), are part of this new science of Nature.

The objectivity of sensate objects is the foundation of our outer sense of reality. "Outer" in the sense of coming in through the senses, the gates informing us about what goes on "out there" (in terms of efficiency & finality). We must assume these stimuli to be independent of the operations on the side of the knower. If not, knowledge is no longer about some extra-mental thing. In that case we plunge epistemology into darkness and break away from the necessary discordant truce between objective and subjective conditions of knowledge, its production and advancement. However, the information gathered by the senses depends on the features of their sensitive surfaces, calling for different physical processes and their limitations. What is gathered on these surfaces is then translated and transported to the thalamus, coding it for reception by the neocortex. At the highest level, this information is presented to the human brain and its mind, imputing a sensate object. Objectivity refers to subjectivity.

§ 9 The Subjectivity of Mental Objects.

α. Sensate and mental objects are those possessed or apprehended by the mind, appearing in a field of consciousness with at its center the cognizer, the knower. Sensate objects only appear when the five senses convey their perceptive information correctly to the brain, offering it (by way of interaction) to the mind and its knower (cf. Criticosynthesis, 2008, chapter 4 & A Philosophy of the Mind and Its Brain, 2009). During sensory deprivation ("pratyâhara"), only mental objects appear. One "observes" with the "inner sense" of consciousness itself. In normal waking, both objects constantly overlap and mingle. Only with analytical attention does one notice their distinctness.

β.
Subjectivity is guaranteed because sensate objects themselves can only be constituted if and only if the data projected on the neocortex by the thalamus is interpreted. And the latter is not merely a computation of the neocortex, but also involves the impact of the mind independent of the brain, namely through interaction by way of (re)valuating the brain's propensity-fields.

β.1 Hence, everything smelled, tasted, seen, heard or touched is already a "thing-for-us" (cf. Kant's "das Ding-für-uns") ; already an appearance of something, not the thing itself !

β.2 This Copernican Revolution reveals the core inspiration of the transcendental level of mind : to unveil, discover or reveal the mechanism of the mind enabling us to impute sensate & mental objects. The presence of these intra-mental operators makes it clear sensate objects merely appear as independent of the mind, and this in a very striking and convincing way. This is the quest leading to the sublime : how can something appear so strikingly different from what it actually is ?

∫ Illusion ("mâyâ") is a truth-concealer, for it poisons the mind to believe a rope is a snake. Like a hallucinogenic, it makes us believe a one-winged bird truly flies.

γ. Subjectivity is the invisible, intangible, non-physical, nonspatial, temporal impact of valuation, reassessment, autopoiesis, auto-structuration and conscious (sentient) choice on the contents of consciousness, i.e. on both sensate and mental objects appearing in its field and apprehended by the subject of experience, the knower, and this at every differential moment of the actual stream of consciousness hic et nunc, i.e. in every instance of its temporal ongoingness and creative advance from its beginningless past to its endless future.

∫ The subject of experience, the knower, depends on the known. The known depends on the knower. In each actuality, both are simultaneous.

δ. Without mental objects, no thoughts, opinions, conjectures, hypotheses or theories could be articulated. Refuting them would also be impossible. This fact is as important as the tenacity of sensate objects, contributing to the grand spectacle of illusions offered by the conventional world and its suffering.

δ.1 Both tyrannies work together to cage our understanding, forcing it to prostrate before the idol of the ideas of the Real or the Ideal. Although theories appear in an intersubjective context shared by all involved sign-interpreters, theoretical constructs, connotations, concepts and words do not replace naked perception, or the data derived from it. Idealism, or the eternalism of the subject, must be avoided as much as realism, the eternalism of the object.

δ.2 Also the negation of anything objective and/or subjective having any functional relevance whatsoever (annihilationism) is to be rejected. Keeping the concordia discors ever alive is accepting both objective & subjective conditions of conceptual knowledge, giving both an equal share in the production of knowledge.

∫ Moving to the extreme of subjectivity ? Call in common sense !

The subjectivity of mental objects builds our inner sense of conscious existence, our ideality. "Inner" in the sense of also appearing without sensate objects (as in sensory deprivation, sensualizations, visualisations, imaginations & dreams) and "ideal" because a sentient apprehension is a non-physical presence and self-reflective. We must accept the mind to be independent of the sensate objects appearing to it. If not, the mind is devalued, and reduced to a real object.
At this point, a merely passive mind must ensue. But the mind is active and co-determines what is called fact ! It co-defines the real. But an ideal subjectivity does not constitute objectivity. Although theory co-determines observation, sensate objects are not solely defined by language-games. Subjectivity refers to objectivity.
§ 10 Direct & Indirect Experience. α. Experience, from the Latin "experientia" or "knowledge gained by repeated trials", the compound of "ex-" or "out of" + "peritus" or "experienced, tested", is what is available through observation. This is apprehending, positing or imputing sensate and/or mental objects in the field of consciousness of the knower. Direct experience is the subjective apprehension of objects here & now. Indirect experience is intersubjective. How to conceptualize the experience of smelling a rose ? β. It could be argued consciousness itself is a mental object. However, a "prise de conscience" is something different from merely being a receptive sentient field with an apprehending center, for it involves attention, intention, introspection, autoregulation, etc. These point to the special dynamic characteristics of sentience, related to the inner, cognizing luminosity of the mind itself. The knower is not a passive mental object, but the transcendental "I think" enabling the processes of the empirical ego to occur. It is of all times and necessarily at work in every cognitive act. The knower actively takes part in every cognitive act. ∫ Empirical ego, transcendental ego, creative self and selfless nondual prehension are the levels of consciousness, its degrees of freedom. γ. Direct experience is gained in the context of reality-for-me ; from the vantage point of the first person. Its objects appear when the knower is alone (the set of observers = 1). Shared by a potentially relevant but insignificant group of observers, direct experience may turn into second person knowledge (the set of observers = 2). Only when, after considerable experimentation, a significant number of involved sign-interpreters deem it so does direct experience become fact, i.e. a third person (the set of observers > 2) item of valid conventional knowledge. At the very moment a fact is produced, experience becomes indirect and therefore intersubjective. δ. Indirect experience involves a sharing of objects by at least two observers. Relevant indirect experience is limited to a small group of observers, while significant indirect experience implies high probability objects, namely those highly recurrent. The latter call for a process of validation involving repeated testing & argued (re)modelling. ε. Direct experience, the foundation of our personal sense of reality, remains, from moment to moment, the cornerstone of the existential situation we find ourselves in. This is the actual mindstream or stream of consciousness with its fleeting moments of sentient activities. This mindstream determines our happiness or misery. The ongoingness of our loneliness gives definiteness to this passage of time and the connections between events correlated with it. Although highly subjective, this intimate knowledge, this direct, living knowledge (cf. "Da'at") co-determines how we perceive the knower and the known. Inner direct experience, the cultivation of attention & autoregulation, and outer direct experience, the science and art of observation, are pivotal in living our inner life well.
∫ Because the smell of a rose cannot be put into words, the most important things in our lives never depend upon reason. The more knowledge is public, the more it becomes indirect. The more knowledge is private, the more it is direct. Although direct knowledge is the root, it cannot serve to build intersubjective paradigms of valid conventional knowledge. This would lead to the domination of the view of a single (or a few) observers over all others. However, absolute truth is an object of direct knowledge. Intersubjective knowledge is always indirect, and belongs to the world of (valid) conventional information. Part of the sapient observer, it no longer merely belongs to his or her personal "Lebenswelt", but to the community of involved rational sign-interpreters. Of course, direct knowledge gathered by the single observer may influence the latter and thus assist in producing experiences shared in common. In this sense, such knowledge is, conventionally speaking, simultaneously highly relevant and highly insignificant (trivial). But because it is highly relevant, chances exist it leads to significant results. Moreover, only by way of direct knowledge does one realize the suchness/thatness of all possible phenomena.
C. Towards a Critical Metaphysics.
Western philosophy, starting in Ancient Egypt and Greece, cherished the quest for the unbounded, self-sufficient (substantial) ground of all phenomena, accepting a permanent core or foundation ad hoc. In Kemet, transcendence remained interdependent, and so a more henotheist, pan-en-theist view dominated. In Greco-Roman religion and philosophy transcendence was always linked to independence, to being Olympically isolated from the plebs below. This aristocratic elitism influenced the intellectuals Hellenizing Judeo-Christian theology. The absolute appeared as a Caesar, the sole "substance of substances", the One Alone, omnipotent & omniscient. This is like turning the ultimate into a creative principle, a self-powered "entity of entities". In modern philosophy, the tendency to reify served the quest for the "great formula" explaining the fundamental nature of phenomena. Either the Ideal or the Real were substantialized and used as two conflicting archaeologies of the possibility of knowledge. Their pre-critical kind of metaphysics dealt with the self-sufficient ground itself. Materialism, realism and empirism battled with spiritualism, idealism and rationalism. The resulting chaos was outstanding. These systems were unable to explain the absolute nature of phenomena in terms of process, abolishing permanency. Thanks to the transcendental study of our cognitive faculty, we no longer ground knowledge outside knowledge, but in the groundless ground of the mind itself. Given process, we no longer accept substance, and so radically relinquish inherent existence from its own side, i.e. independent & separate substance. The first question of critical metaphysics, besides keeping the demarcation with science intact, is indeed : "Why something rather than nothing ?" Hence, the study of existence is crucial. For finding no permanent object and concluding all phenomena are impermanent transforms critical metaphysics into a metaphysics of process.
§ 1 Transcendence & Interdependence in Ancient Egyptian Sapience. α. The Ancient Egyptians deliver our earliest -though by no means primitive- written evidence of extensive speculative thinking (cf. The Pyramid Texts of Unas, 2006).
One may therefore characterize Egyptian thought as the beginning of speculation, if not of philosophy. As far back as the third millennium BCE, they posed questions about being and nonbeing, the essence of time, the nature of the cosmos and man, the meaning of death, the foundation of human society and the legitimation of political power, etc. ∫ To read a ca. 4,300-year-old canonical text without any transcription errors is indeed a rare feat. β. Considering the three stages of cognition (cf. Criticosynthesis, 2008, chapter 6), two important demarcations need to be made. The first exists between ante-rationality and rationality. The second between rationality and meta-rationality. β.1 These distinctions point to the integration of decontextualization. Before the rational stage, conceptualization is either pre-rational or proto-rational, introducing unstable pre-concepts or contextualized concepts. With the advent of formal thought, and based on the gained capacity to make abstractions, theory appears. β.2 The second line is between, on the one hand, conceptuality, and, on the other hand, a-conceptuality & non-conceptuality. Mythical thought is a-conceptual. Nondual thought is non-conceptual. Between these, the concept is at hand in various forms : pre-concept, concrete concept, formal concept, transcendental concept & creative concept.
mode : stage of cognition : level of concepts
1. mythical : ante-rational : a-conceptual
2. pre-rational : ante-rational : pre-concept
3. proto-rational : ante-rational : concrete concept
4. formal : rational : abstract concept
5. critical : rational : transcendental concept
6. creative : meta-rational : creative concept
7. nondual : meta-rational : non-conceptual
γ. In genetic epistemology, the cognitive process is analyzed in terms of coordination of movements, interiorization and permanency :
1. initiation : the formation of new cognitive forms is triggered by the repeated confrontation with an unexpected, novel action, a set of events radically undermining the tenacity with which acquired ideas shape a particular, limited view of the world. This is a secure & stable architecture of habits & expectations, dramatically challenged by this significant confrontation with the novel action. No conceptualization occurs, for objects and beings are equated with their motoric coordinations (as in mythical thought) ;
2. processing : action-reflection or the interiorization of this novel action by means of semiotical factors ; this is the first level of permanency, fashioning pre-concepts having no decontextualized use (as in pre-rational thought) ;
3. expanding : anticipation & retro-action using these pre-concepts, valid insofar as they symbolize the original action, but always with reference to context : the concrete concept (as in proto-rational thought) ;
4. final level of permanency : formal concepts, valid independent of the original action & context, the formation of permanent cognitive (mental) operators : the abstract concept (as in formal thought).
δ. Ancient Egyptian cultural objects are always contextualized and rooted in mythical constructs and topical pre-concepts. This makes it more difficult to take note of the general features of the patchwork. But a number of strata do appear : Heliopolitan, Hermopolitan, Osirian, Memphite and Theban speculative thought can be textually identified (cf. Ancient Egyptian Wisdom Readings, 2008). These themes can be isolated because proto-rationality does have a closure, albeit one dependent on the context at hand.
The "Greek miracle", the introduction of abstraction or the decontextualized use of concepts, did not preclude pre-Greek civilizations, of which Ancient Egypt was the grandest, to produce great thinkers, writers, men of science & philosophers avant la lettre. Of all peoples of Antiquity, the Ancient Egyptians were the most literary, reproducing huge quantities of hieroglyphic texts in their tombs and on the walls of their temples. Comparatively, a huge number has been recovered, but we know the majority was lost ... ∫ Two central themes run through the whole of Dynastic Egypt : (a) the balancing role of the divine king (in particular in causing the Nile to flood in accordance with Maat) and (b) the unity-in-multiplicity of the natural & divine orders. ε.  In the henotheism of Ancient Egypt, the radical ontological difference between the creating and the created pertains. The former ("natura naturans"), consisted of the light-spirits of the gods and royal ancestors (the "akhu"), residing in the circumpolar stars, untouched by the movement of rising and setting, shining permanently from above. These spirits did interact with their creation ("natura naturata") by means of their "souls" ("bas") and "doubles" ("kas"). The Bas represented the dynamical, interconnective principle, ritually invited to descend and bless creation by way of the offerings made to their Kas. These resided on Earth in the cult-statue hidden away in the dark "naos" or "holy of holies" of the Egyptian temple. Only the king or his representatives could enter this sacred space and offer the world-order ("Maat"). This exclusivity was the result of the fact gods only communicate with gods and the king was the only "Akh" or divine spirit actually embodied on Earth. So he alone could make the connection. The transcendent nature of the deities, their remote presence as well as their exclusive mode of interaction, point to a monarchic mentality, to a radical transcendence, and, mutatis mutandis, the ontological difference between, on the one hand, the eternalized world of the deities and, on the other hand, the chaotic, everchanging world of man. A division to return in Platonism. ∫ The divine "akhu" are the differential states of light, derived from Atum at the first occasion ("zep tepy"), when he was one but also two and so forth (an Ennead). Monotheism, the affirmation of singularity, is not part of Kemet. The "King of Kings" is Hidden, One and Millions (cf. The Hymns to Amun, 2002). The reification of light led to the notion of a hidden, fundamental "essence", a substance existing from its own side. While Akhenaten tried to reify light, turning it into the sole "substance of substances", Egyptian culture at large rejected this singular deification. ζ. Besides positing this substantial division of Nature in two, the Ancient Egyptians stressed their mutual dependence (cf. The Maxims of Good Discourse or the Wisdom of Ptahhotep, 2002). The procedure of weighing became a metaphor of the shamanistic exchange between the transcendent and the human world. The pair of scales involved the natural, automatic functioning of a natural law, namely "Maat", the deity of righteousness & truth born with the universe ... 
"Said he (Anubis) that is in the tomb :  'Pay attention to the decision of truth Papyrus of Ani, Plate 3  (note how the plummet hangs as a heart on the Feather of Maat) In this short exhortation, a practical method of truth springs to the fore : concentration, observation, quantification (analysis, spatiotemporal flow, measurements) & recording (fixating). This with the purpose of rebalancing, reequilibrating & correcting concrete states of affairs, using the plumb-line of the various equilibria in which these actual aggregates of events are dynamically -scale-wise- involved, causing Maat to be done for them and their environments and the proper Ka, at peace with itself, to flow between all parts of creation. The "logic" behind this operation involves four rules : η. The later notions of "nous" and "logos", at one time supposed to have been introduced into Egypt from abroad at a much later date, were present at a very early period (cf. The Memphis Theology, 2001 & On the Shabaka Stone, 2001). Thus the Greek tradition of the origin of their philosophy in Egypt undoubtedly contains more truth than some Classical scholars would prefer (cf. Hermes the Egyptian, 2002). Before the earliest Greek philosophers were born, the practice of interpreting  the functions and relations of the Egyptian gods philosophically already begun in Egypt. Is it impossible the Greek practice of interpreting their own gods likewise received its first impulse from Egypt ? No. Shabaka Stone : LINE 53 (Memphis Theology - hieroglyphs in red are reconstructed) : "There comes into being in the mind. There comes into being by the tongue. (It is) as the image of Atum.  Ptah is the very great, who gives life to all the gods and their Kas. It all in this mind and by this tongue." eart" may be translated as "mind" & "tongue" as "speech". The "heart of Ptah" is not yet a Greek "nous" devoid of context, i.e. an abstract, rational idea. Only concrete concepts prevail and closure is proto-rational. Rather, the contents of mind (or the meaning of the words) simultaneously move Ptah's tongue, bringing out the words actually spoken. So besides transcendence and a very strong interdependence between Heaven and Earth, Egyptian sapience attributed creative power to the spoken word, in particular in terms of giving particular form to the objects of creation. Such a "great word" was an authority ("hu") by itself, commanding powers ("heka") not to be stopped. Full of understanding ("sia"), it could only be spoken by the divine king himself and his chosen high priests. For only the king was a "Son of Re", the sole divine "akh" or spirit on Earth and so the exclusive mediator between Egypt and the gods. θ. In Ancient Egyptian literature, lots of themes animating Greek philosophy since Pythagoras are on record. However, these speculations always reflect an ante-rational mode of cognition, characterized by the total absence of theory, abstraction and the use of decontextualized (formal) concepts. This makes understanding them so difficult, but also very rewarding. ∫ Not to study Ancient Egyptian literature & sapiental discourses, is to neglect the mother of Western philosophy. It is a mistake to think philosophy started with the Ancient Greeks. Although introducing formal thinking, the Greeks were inspired by the sapience they found in Egypt. Most themes found in Greek metaphysics were part of the ante-rational speculations of the thinkers of Kemet. 
In particular, their views on substance ("akh"), transcendence ("pet") and interdependence ("ba" & "ka") had a profound effect on Platonism and Greek science. This does not imply Greek philosophy was "out of Africa", but neither can one claim Hellenistic speculative thought was a spontaneous find of the Greeks. Inventing the syllogism, they often got the second premise from the vast Kemetic storehouse of observation.
§ 2 Greek Metaphysics : Transcendence & Independence. α. Describing the particulars of the Ancient Greek mentality calls for more than youth, keen interest, opportunism, individualism & anthropocentrism. With the introduction of formal conceptual reason and its application to the major problems of philosophy (truth, goodness, beauty & the origin of the world, life and the human), a completely new kind of sapiental thinking was set afoot. Theory, linearization and abstraction were discovered and applied, giving birth to a new style. The Greek method of analysis & synthesis objectified the immediate in discursive terms, and this in a script symbolizing vowels. This Hellenizing leap forward was then offered to (and enforced upon) the world. It was introduced as far as India, where it influenced mathematics, astrology and Buddhist iconography, but also heralded the Ptolemaic Period of Ancient Egypt (305 - 30 BCE), bringing about Hermetism (cf. Hermes the Egyptian, 2003), as well as an Egyptian (Judeo-Christian) Gnosticism. β. As Indo-Europeans, the Ionian "sophoi" pioneering Greek philosophy had typical features of their own :
• individuality / authority : a single member of humanity was no longer ontologically inferior to the group, the tribe, the clan, the nome, etc. There must be good reasons to accept any authority ;
• exploring mentality : one must seek the final frontier, integrate what is the best and keep what is good ;
• unique dynamic script : by the introduction of vowels, the written and the spoken word mirrored each other more adequately ;
• linearizing, geometrizing method : phenomena obey mathematics, and a stable, linear description prevails ;
• anthropomorphic theology : the Supreme Beings are like a human family, with a paternalist figure-head. Henotheism ensues and prevails throughout Paganism. The Supreme is essentially One, but existentially Many.
γ. In their ante-rational speculations, the pre-Socratics sought the foundation or "arché" of the world. This final, self-sufficient, autarchic ground had to explain existence as well as the moral order. For Anaximander of Miletus (ca. 611 - 547 BCE), the cosmos developed out of the "apeiron", or "no bound", the boundless, infinite & indefinite. This is without distinguishable qualities. Later, Aristotle would add a few of his own : immortal, Divine and imperishable. δ. The Archaic stratum of the "Greek Miracle" was layered. Steeped in Greek myth (Hesiod, Homer), pre-concepts emerged, rapidly followed by a series of concrete concepts playing a comprehensive, totalizing role in the explanation of what is at hand :
• Milesian "arché", "phusis" & "apeiron" : the elemental laws of the cosmos are rooted in substance, which is all ;
ε. The Ionians, largely basing themselves on myth, introduced the first pre-concepts & concrete concepts. Thanks to Pythagoras (ca. 580 - ca. 500 BCE) and the Eleatics, the a priori dawned. A new mathematics, logic & rhetoric were born. The term "philosophy" was coined.
ε.1 After the Persian Wars (449 BCE), starting with the Sophists, Greek philosophy displayed the rule of reason & the subsequent liberation of thought from all possible contexts. Abstraction could come into play. The subsequent relativism of the Sophists is rejected by Socrates (470 - 399 BCE). He sought universal, eternal truths by way of dialogue, criticizing established views and inviting his listeners to discover this truth by the use of their own minds. For Socrates, the practice of philosophy helps to understand the role of the human being as part of the "polis", a designated community. Plato, Xenophon & Aristophanes portray an original, unique, civilized but non-conformist individualist, ironical, brave, dispassionate and impossible to classify, belonging to no school. ε.2 This exceptional individual embodied the ideal of Greek philosophy.
• philosophy is a radical, uncompromising, authentic search for understanding, insight & wisdom ;
• philosophy is never an intellectual, optional "game", but demands the enthusiastic arousal of all faculties, addressing the "complete" human and giving birth to a practice of philosophy ;
• philosophy equals relative, conventional, approximate truth, but never absolute truth. Greek philosophy, accepting intuition, never eliminates reason.
ζ. The classical systems of Plato (428 - 347 BCE) & Aristotle (384 - 322 BCE) are a reply to the relativism of the Sophists. Protagorian relativism is rejected. To refute this scepticism, i.e. the claim there is only "doxa", opinion, never "aletheia", truth, Classical Greek philosophy opts for substantialism, accepting some permanent, static, unchanging, self-sufficient core to exist in changing things, this core being their substance. This essence ("eidos") or substance ("ousia") may be subjective or objective. ζ.1 As the ideal, it is a subject fundamentally unmodified by change. This higher subject is viewed as an inner, inherent ground acting, from its own side, as the common support of the successive inner states of mind. ζ.2 As the real, the substance of a thing is deemed the stuff out of which it consists, explaining the manifestation of the extra-mental, objective, kicking world "out there". Both need to be criticized. η. In Western substantialism or essentialism, the substance of A is the permanent, unchanging, eternal, final, self-sufficient ground, foundation, core or essence of A, something existing from its own side, never as an attribute of or in relation with any other thing. ∫ If I think my wife (husband) is real, how to make love to her (him) ? If I think my wife (husband) is ideal, how to remain serious ? θ. Both Plato & Aristotle are substantialists and concept-realists. They seek a self-sufficient ground and both root our concepts in an extra-mental reality outside knowledge. Plato cuts reality in two qualitatively different worlds. True knowledge is remembering the world of ideas. He roots it in the ideal. Aristotle divides the mind in two functionally different intellects. To draw out & abstract the common element, an "intellectus agens" is needed. He roots knowledge in the real. ι. The foundationalism inherent in concept-realism seeks permanence but cannot find it. It therefore ends the infinite regress ad hoc and posits something to be possessed by the subject. This is either an object of the mind (like an active intellect or an eternal soul) or an object of the extra-mental world (the permanent stuff of reality).
Greek concept-realism seeks substance ("ousia") and substrate ("hypokeimenon"). This core is permanent, unchanging and exists from its own side. κ. In concept-realism and foundationalism, truth is transcendent, independent and permanent (eternal). As soon as positing a fixed & static object is habitual, the mind arrests its primary critical task of continuously distinguishing between a substance-based and a process-based view of sensate and mental objects. Avoiding the first makes possible an infinite potential and dynamic transformation due to interdependence. For a mind entrapped by the illusions displayed by truth-concealers, ever-changing display and the rise of multiplicity are impossible. ∫ Positing substance splits the stream, while accepting process makes way for the flow.
The "Greek miracle" escaped the narrow confines of the contextual thinking characterizing the way of Antiquity. Formal, theoretical thought, individualism and a dialogal attitude would revolutionize speculation and give birth to philosophy as a rational way to understand the world as a whole. The pre-Socratics introduced fundamental concepts like "arché", "logos" & "aletheia". The Eleatics heralded the a priori, while Democritus focused on the a posteriori and the Sophists introduced the pragmatism & relativism of the "anthropos". The classical systems of Plato and Aristotle tried to bring these together within the framework of a generalizing concept-realism, grounding the truth of concepts in either a transcendent ideality or in the world of the senses respectively. Substantialism (essentialism) was deemed necessary to explain the possibility of knowledge. Ontology defined epistemology.
§ 3 Metaphysics in Monotheism and Modern Philosophy. α. Greek rationalism and concept-realism influenced Egyptian thinking, triggering Hermetism (cf. Hermes the Egyptian, 2003). The "Greek miracle" had a decisive impact on Judaism, as it would have on Christianity and Islam. At first, Platonism and neo-Platonism prevailed, but then Aristotelism took over. Greek substantialism overcrowded monotheist theology. What started as an apology serving the spread of a Semitic religion of the desert among educated urban Greco-Romans ended up as a fundamental theology saturated by the static framework of Classical Greek thought, inviting the identification of the Supreme Being of monotheism with the Platonic "substance of substances", the "summum bonum" or the Peripatetic "Prime Mover". Thus the "Living God" of revelation, in touch with His Creation, was transformed into a "Caesar", a Supreme Being, independent & self-sufficient, the One Alone, the Monad or "Absolute of absolutes" looking down on His creatures. Omniscient & omnipotent, this "God of Gods" could hardly entertain any interest in humanity, except in terms of a strict Greek analysis of the rules and obligations laid down in His scripture ... ∫ The "religions of the book" derived a view of the absolute using an exegesis based on Greek metaphysics. Such a view serves well all figures of authority trying to fool other men into "spiritual" servitude. β. Greek logic forced the implicit theomonism of the Torah into a monotheism. This view was unable to embrace the bi-polar nature of the Deity. Indeed, "YHVH ALHYM", revealed to Moses on the Horeb, was both singular ("YHVH") and plural ("ALHYM").
This "coincidentio oppositorum", also found in Ancient Egyptian sapience, in particular in the transcendent function of Pharaoh, a shamanist king of sorts, comes nearer to the direct experience of the Divine "face to Face". In essence, God is ineffable (singular), but existentially He is "Elohîm", and so plural. This is pan-en-theism and theomonism, but not strict monotheism. What happened ? The Greeks translated the Hebrew Name of God as "theos" (singular), eclipsing the Divine Presence ("shekinah") given with the plural "Elohîm". In this way, Judaism got Hellenized, triggering countless fringe counter-movements (cf. the Qumrân-people, the Zelotes, the Johannites, the Jesus-people etc.) γ. The issues related to the Persons of the Holy Trinity were tackled with the Greek triadic logic of "monos" (manation), "proodos" (emanation) & "epistrophe" (return). The stringent nature of both Greek formal logic and concept-realism caused the dogmatic breach between Orthodox and Roman Trinitarism, for Rome allowed the Spirit to also proceed from the Son (cf. the "filioque" - A Christian Orthodoxy and the Holy Spirit, 2004). The conceptual difficulties related to the nature of Jesus Christ, to be named "God" in the same measure as His Father -with whom He is consubstantial- but also fully & perfectly human, gave rise to a rich tapestry of conflicting views. These were elaborated using the full measure of the possibilities given by traditional formal logic. They caused many heresies (alternative choices) and doctrinal problems. These induced violence, both mental and physical. The direct experience of the "Living Christ" was thus replaced by a theological system, a monolith intended to rule the world, spiritually & worldly. The spiritual impetus of the Egyptian Hermits in Christ was soon replaced by monastic orders protected by walls and controlled by the Episcopate. δ. The Koran sees with two eyes. With the left, the remote, essential, substantial side of "Allâh" ("The God") is seen. This leads to the theology of the law. With the right, the near, actual presence of "Allâh" is experienced, bringing in rapture, beauty, poetry and all possible enjoyments. This leads to the theology of spiritual emancipation. After the death of Muhammad, the Prophet of Islam, peace be with him, Islam spread out and assimilated Greek science, logic & philosophy. In a few centuries, it had gotten Hellenized and even integrated the Hermetism of Harran ! The logic of remoteness, largely and subreptively based on the model of The One of Plotinus, gave weight to the idea of predestination. The overpowering, Imperial interpretation of the omniscient & omnipotent status of "Allâh", favoured by jurists, scholars & intellectuals alike, made any kind of intimate encounter with the Divine suspicious (as in Sufism). Due to the Greek "privatio", the world and man were deemed without self-sufficient substance, and hence, with the turn of Greek logic, The God is the only one truly in charge of Being, exception made for the Perfect Man, an embodiment of the 99 Names of "Allâh", personified in the person of Mohammed. Again the logic of Greek formalism had embanked a living stream, causing strong oppositions and theological schisms. Politically (cf. Sunna versus Shi'a), as well as hermeneutically (cf. Sharia versus Sufism), tensions were and are too often coupled with disrespect, brutality & violence. 
Because the power of formal logic is nowhere granted more privilege than in Islamic theology, the danger of becoming entrapped in radical dogmatism & fanaticism is outstanding. ∫ Monotheist theology remains a monolithic mastodon, displaying a gigantism slowly brought down by the discoveries of science and the ongoing creative advance of the human mind.
The impact of the monotheist concept of God on pre-critical metaphysics was unmistakable. In Scholasticism, philosophy merely served theology, so the link is obvious. However, it took modern philosophy also quite some time to abolish the substantial God.
• Humanism : (a) non-radical, nominalist denial of the conceptual realism of Scholasticism, (b) observation & experiment, (c) bricoleur-mentality deriving from the individual & (d) focus on solving practical problems ;
Although the authority of religious potentates in non-spiritual matters comes under fire, the existence of a Supreme Being is not denied, neither is substantialism, trying to identify a permanent "core" in phenomena. Disclosing the plan or mind of the omnipotent & omnipresent God was no small motivation.
• Rationalism of Nature : (a) mathematics as the final foundation of knowledge, a clear, distinct, continuous, certain & absolute self-sufficient ground, the final truth of which is to be intuitively grasped, (b) systematic observation & formalization of facts, (c) focused on a closed, knowledge-founding & dualistic worldview & anthropology ;
For Descartes, God guarantees truth. Classical rationalism maintains an abstract concept of the Supreme Being, still viewed as existing from its own side, inherently. Both the ego cogitans, the extended things and God are substances. Spinoza goes a step further, and defines God as the sole substance with an infinite number of attributes (of which humans only grasp two). Leibniz also maintains the God of substance, adding a theodicy stressing He created the best possible world ...
• Empirism of Nature : (a) mathematical certainty & impressions are the foundation of knowledge (phenomenalism), (b) systematic observation & its formalization, (c) sceptic agnosticism undermining positive science, scholastic & natural metaphysics alike ;
Empirists like Locke and Hume no longer wish to incorporate non-sensate objects like God. They introduce the first step in an increasing cleavage between science and the God of revelation. No longer needing "this hypothesis" (Laplace), they restrict the domain of valid knowledge to statements incorporating empirical data. God slowly fades to the background and becomes a private matter.
• Criticism : (a) a systematic, transcendental investigation of the objective boundaries of "Verstand" (mind) and "Vernunft" (reason) operating in the subject of knowledge, (b) the elimination of the ideas of God, Soul & World as constitutive for knowledge, (c) Copernican Revolution : the human mind imposes its own a priori categories on Nature, (d) focused on a new, scientific (immanent) metaphysics not moving beyond the boundaries necessary for mind & reason to function properly ;
Even Kant, although ousting God from the field of pure reason, retained the concept of a substantial God, reintroduced as a postulate of practical reason ! This divide between theory & practice, as well as unsolved theoretical problems, triggered idealism. Misunderstanding Kant, German Idealists like Fichte, Schelling & Hegel bring about a reactionary revival of Divine substance. Introducing a dynamism, Hegel tries to incorporate the idea of historical change.
It eludes him that one cannot truly couple substantialism with dynamism, except by violating the "dead bones" of formal logic ... the result being a philosophy pitying facts.
• Technologicism : (a) metaphysics & theology are negative values, facts are positive (Comte) and science is able to work in a way not involving subjectivity at all (Weber), (b) sense-data are the foundation of knowledge & the emergent technological materialism (Russell), (c) a definite movement towards a new, secular scientific class fashioning their logical-positivist monolith dictating atheism (or agnosticism) and reductionist humanism ;
In the Romantic Age, while God is finally driven out from the edifice of Newtonian science, we witness an exoticism introducing Eastern ideas of the Divine and interest in fringe subjects (cf. psychic research, occultism, Egyptomania). In philosophy, a protest movement unfolds rejecting the supreme role of reason. Nietzsche correctly foresees the end of the Platonic God ... Technology based on Newtonian science is the new "Holy Grail".
• Institutionalism : (a) rapid, massive global divulgation of closed Carnot systems, (b) valid knowledge is tested & consensual : a scientific elitism with its given discourses, conventions, parlances and local logics - science as the servant of industry, the military, the "powers that be", (c) focused on the illusionary metaphysics of permanent scientific discovery & material growth, (d) denial of the role of the First Person Perspective in science, (e) negation of the results of observational psychology and the cult of sense-data, instrumentalism & strategic communication ;
With materialism, physicalism, scientism, logical positivism, instrumentalism and the like, the subject of experience is reduced to the physical stuff of the brain, and belief in God has become silly & retarded. Metaphysics is no longer a valid subject of inquiry. This new paradigm conquers the Western world and is institutionalized. Opposing views are disposed of as useless and boycotted.
• Fossilism : (a) globalization of egology, destruction of ecosystems & social depravity, (b) rapid moral degeneration, corrupt status quo, the rise of counter & anti-cultural movements, the institutionalization of incompetence, massive global squandering of material resources, (c) virulent nihilism, death-art, the cult of irrationalism & the rise of posthumous modernism, technocratic science, militarism, narcissism & consumerism, (d) total & global misunderstanding of the needs of humanity & its survival, (e) collective forms of psychosis & hysteria, rise of violence, insecurity & ecological catastrophes, (f) fall of communism and the assimilation of socialism and ecology into late capitalism and its inherent Plutocracy : egoism "enlightened" by black light.
ε.1 Modernism collapsed as soon as the "grand tales" invented by reductionism, materialism & physicalism were found to be defunct. Postmodernism introduced a "margin", a sidetrack deconstructing these main ideologies. The days of foundationalism, so cherished by modernity, are finally over. Replacing the substantial God with a physical self-sufficient ground did not lead to the expected social, political & economic harmonization, quite the contrary. It destroyed the ecosystem and brought about a new world disorder. There is no "invisible hand" regulating late capitalism. Modernity ends in chaos & more suffering for all. Physical poverty and a psychological poverty-mentality abound. Who has not been driven into the cage of alienation ?
ε.2 Hypermodernism will truly begin when science realizes it has refuted too much. Relativity, quantum theory, chaos theory and string theory reintroduce the subject, and a renewed interest in criticism brings about a "linguistic turn". Even the absolute is reintroduced, albeit not as the substantial God. The way Nature is questioned influences the way Nature responds. Metaphysics cannot be banished but needs to be redefined. The advent of the WWW ends the restriction of information, assisting the divulgation of a multi-cultural and global worldview. But this hypermodernism has not yet reached society at large. Forced by economic & ecological catastrophes, a global change and the advent of a New Renaissance may be expected. ζ. The death of the Greek version of the Divine is not the end of the concept of the absolute, nor of the possibility of an absolute process. God* as conceived here is no longer before or beyond the world, but with all entities. In this view, God* is both impersonal (transcendent, primordial) and personal (immanent). Sharing many features of the semantic field of the Supreme Being as found in the monotheisms expounding God, It differs radically on a few crucial points : this ultimate, merely sufficient ground is not the "substance of substances", but a Divine Process. This is both impersonal and personal, both a He, a She and an It, merely by convention addressed as "He", "Him" and "His". Moreover, God* is not omnipotent, nor a Creator ! η. The God* of process is a non-spatiotemporal actual entity giving relevance to the realm of pure possibility in the becoming of the actual world. Both potential & actual, He (She, It) is the meeting ground of the actual world & pure possibilities. Together, the realm of abstract possibilities and the actual world constitute Nature. ∫ The "God of the Philosophers" is not a God of revelation, except if the latter is ongoing. S/He is not a God beyond Nature, but with Nature.
Greek substantialism, being the intellectual framework of the educated elite, became part of the theologies of the three monotheisms. God was the "substance of substances", a Supreme Being who created the world "ex nihilo". Forced by the necessities of formal logic, these theologies incorporated the problems inherent in every formal system, namely completeness & consistency. Following Plato & Aristotle, the God of monotheism became a substantial God, self-referential & autarchic, an absolute existing inherently from its own side, isolated and independent from its own creation. Unchanging, such a God could not accommodate history and be "Emmanuel", a "God-with-us". This ultimate God-as-substance was believed to be the ontological "imperial" root of all possible existence. This God is distinct (another thing or "totaliter aliter") and radically different (made of another kind of "stuff" than the world). By identifying the mind of God with Plato's world of ideas, the Augustinian Platonists had to exchange Divine grace for enlightened, intuitive reason. Thomist Peripatetics introduced perception as a valid source of knowledge and so prepared the end of fundamental theology, the rational explanation of the "facts" of revelation. For Thomas Aquinas, the relation between God and the world is a "relatio rationis", not a real or mutual bond. This scholastic notion can be explained by taking the example of a subject apprehending an object. From the side of the object only a logical, rational relationship persists. The object is not affected by the subject apprehending it.
From the side of the subject however, a real relationship is at hand, for the subject is really affected by the perception of the object. According to Thomism, God is not affected by the world, and so God is like a super-object, not a subject. The world however is affected by this object-God. The relationship between God and the world can therefore not be reciprocal. This being so, the world only contributes to the glory of God ("gloria externa Dei"). The finite is nothing more than a necessary "explicatio Dei". This is seen as the only way the world can contribute to God. This view contradicts the notion of the "Living God", a Deity part of history and so influenced by the free choice of sentient beings.
§ 4 The Fundamental Question : Being or Knowing ? α. Driven by the archaic need to find a self-sufficient ground, an "arché", the Greeks first unveiled the foundation and then explained how knowledge is possible. α.1 Plato posited a world of ideas, in all ways better than the world of becoming, and derived his epistemology of remembrance from the radical division ("chorismos") between both. The world of becoming, ever changing, multiple and diverse, could not serve as a self-sufficient ground for the absolute, unchanging truth he sought. Likewise, Aristotle, although rejecting the existence of two worlds, would first explain how all things depend on four causes (material, efficient, formal & final), and only then explain how the passive & active parts of the intellect functioned. α.2 In Greek concept-realism, the theory on being (ontology) acted as an archaeology for the theory on knowledge (epistemology). One seeks a place ("epi") on which a subject might stand ("histâmi"). Being came before knowing. β. In the Middle Ages, the apory between exaggerated realists ("reales") and nominalists ("nominales") implied a logico-linguistic transposition of the ontological apory between Plato and Aristotle. Indeed, the so-called "battle of universals" transposed Greek concept-realism, nurturing the division between "ante rem" and "in re". Universals are either before or in the realities of which they are abstractions. The extraordinary contribution of Abelard (1079 - 1142) to epistemology is his avoidance of the apory by introducing a third option :
1. universale ante rem : the universals exist before the realities they subsume : Platonism ;
2. universale in re : the universals only exist in the realities ("quidditas rei") of which they are abstractions : Aristotelism ;
3. universale post rem : universals are words, abstract universal concepts with a meaning, given to them by human convention, in which real similarities between particulars are expressed. The latter are not "essentia" and not "nihil", but "quasi res".
Abelard's solution calls for a crucial distinction : universals are not real, but they are nevertheless words (real sounds) with a significance referring to real similarities between real particulars. Because of their meaning, they are therefore more than "nothing". The foundation of his particular nominalism is "the real" as evidenced by similarities between objects, whereas the "reales" supposed an ante-rational symbiosis or a symbolical adualism between "verbum" & "res", between Platonic ideas and material objects ("methexis"). With his solution, Abelard paved the way for Hume (1711 - 1776), for this radical empirism accepted -without being able to explain them- similarities between sense-data. ∫ Too much empirism betrays the necessity of an active mind.
Too much mentalism hampers the sincerity with which we hold things to be true. γ. With William of Ockham (1290 - 1350), concept-realism is finally relinquished. The foundational approach is also left behind. The nominal representations arrived at in real science are only terministic, i.e. probable. They concern individuals, never extra-mental "universals". Real science deals with true or false propositions referring to individual things. These empirical data are the primordial and exclusive means to establish the existence of a thing. The concept ("terminus conceptus" or "intentio animæ") is a natural sign, the natural reaction to the stimuli of a direct empirical apprehension. Rational science is possible, but it only concerns terms, not universal substances. With Ockham, the first inkling of what would become the Copernican Revolution is felt : one first needs to study the possibilities of knowledge before making statements about being. Our cognitive apparatus (the tool) is to be thoroughly known before launching ontology. Knowing is before being. ∫ Franciscan logic is simple : less is more.
In an effort to lessen their feelings of insecurity and to explain how to control the multiple, non-linear, chaotic world (of becoming), Egyptian & Greek sages alike sought a "hypokeimenon", in other words, a singular super-thing underlying every possible other thing. Their minds favoured an isolated, self-dependent & unchanging absolute self-sufficient ground : solid, permanent & separate. They could not conceive the absolute as dynamical, interdependent & other-dependent. These philosophers placed being before anything else. These "saa", "sophoi" or sages considered it their privilege to make statements about this final self-sufficient ground. Different "schools" arose. In Egypt these remained contextualized (Memphis, Heliopolis, Hermopolis, Abydos, Thebes) and so dependent on the "Great House", the rule of the Solar king guaranteeing unity (in plurality). In Greece, while the tenets of each school were reasonable, bringing them together merely generated contradictions, inviting the scorn of the sceptic and the sophist. This in turn motivated system builders like Plato, Aristotle & Plotinus. Although the ontological intent may be laudable, especially as a quest for a totalizing, comprehensive world view, metaphysics cannot but fail if one does not first consider the instrument with which this captivating pretence of total overview is made, namely the mind. Indeed, all statements about the absolute nature of phenomena always happen as part of the field of consciousness of those who make the claim. One cannot step outside the mind to witness how things are without it. The trick of Baron von Münchhausen, lifting himself up by pulling at his own hair, may delude those ill-prepared, but never fools attentive thinkers. Imputing being before knowing is the way of pre-critical philosophy. First studying the mind and then making generalizing statements about the common features of all possible phenomena is what is at hand.
§ 5 Precritical Metaphysics : Being before Knowing. α. Remigius of Auxerre (ca. 841 - 908) taught any species to be a "partitio substantialis" of the genus. The species is also the substantial unity of many individuals. Thus, individuals only differ accidentally from one another. All beings are modifications of one Being. A new child is not a new substance, but a new property of the already existing substance called "humanity" (a flavour of monopsychism is felt). β.
When being is posited before knowing, an implicit symbolical adualism between the name (or word) and its reality or "res" must be at hand. Words are not merely "flatus vocis", but refer to an extra-mental reality outside them, either as an idea or universal existing in another world or as a universal realized in individuals in this world. This semantic adualism, baked into the fabric of reality, backs the ontological "proof" of the existence of God (cf. Anselm of Canterbury - Criticosynthesis, 2008, chapter 7), but can also be found in Heidegger after "die Kehre". In strict empirism (cf. David Hume), this natural, pre-epistemic bond between words and what they represent is eliminated, but then it becomes unclear how one is able to identify any common ground between sense-data on the basis of sense-data alone, triggering scepticism. ∫ To think the transition between words and their reality as seamless is to accept the unchecked psychomorphic activity of ante-rationality. γ. Besides the dangers of dogmatism (identifying a common ground between words and reality ad hoc) and scepticism (denying any common ground, plunging epistemology into absolute relativism ad hoc), promoting being before knowing, and so positing entities before analyzing the possibilities of the cognitive tool attending them, leads to a multiplication of self-sufficient grounds. This absurdity, already apparent in classical Greek thought (namely the divide between Plato & Aristotle), returns in Scholasticism as the schism between "reales" and "nominales" and can also be found in the Modern Age as the conflict between empirism and rationalism. This was the scandal keeping Kant awake at night ... How to erect a stable foundation for philosophy ? One as solid and universal as Newton's law of gravitation ? This cannot be only a matter of choice (this-or-that conjectured self-sufficient ground), but must be based on a transcendental logic necessitating the principles of conceptual rationality itself. ∫ First we learn how to use a tool, then we use it. But we learn to use it by using it and so when using it we merely perfect our use of it.
Not only does essentialist concept-realism conjure a world of static models tainted by apory, but it displays the naiveté of believing anything true can be acquired by stepping outside the limitations imposed, in the first place by cognition itself, but also by conceptual reason and its empirico-formal propositions and their paradigmatic synthesis. The conviction of having found an Archimedean stronghold blinds reason, no longer able to argue its over-the-top imputations, except ad hoc. Two extreme positions are therefore to be avoided : "being" is not to be identified with a world of ideas "in here", nor with the real world "out there". What being is in an absolute sense, as transcendent metaphysics clarifies, is no longer an object of conceptual reason. Relative being only affirms the existence of a set of features of actual occasions. Non-existence is the absence of such. Full-emptiness affirms that every phenomenon, although other-dependent, lacks substantial existence of any kind. Empty of self, it is full of the others.
§ 6 Critical Metaphysics : Knowing before Being. α. With his "Copernican Revolution", Kant (1724 - 1804) completed the self-reflective movement initiated by Descartes, focusing on the subject of experience.
Integrating the best of rationalism and empirism, he avoided the battle-field of the endless (metaphysical and ontological) controversies by (a) finding and (b) applying the conditions of all possible conceptual knowledge. β. An armed truce between object and subject is realized. Inspired by Newton (1642 - 1727) and turning against Hume, Kant deems synthetic propositions a priori possible (Hume only accepted direct synthetic propositions a posteriori). Contemporary criticism no longer goes as far as Kant. Empirico-formal statements are fallible and relative. γ. There is a categorial system producing scientific statements of fact. These are always valid and necessary (for Hume, scientific knowledge is not always valid and necessary). This system stipulates the conditions of valid knowledge and is therefore the transcendental foundation of all possible knowledge. δ. Unlike concept-realism (Platonic or Peripatetic) and nominalism (of Ockham or Hume), critical thought, inspired by Descartes, is rooted in the "I think", the transcendental condition of empirical self-consciousness without which nothing can be properly called "experience". This "I", the apex of the system of transcendental concepts, is "for all times" the idea of the connectedness of experiences. It is not a Cartesian substantial ego cogitans, nor a mere empirical datum, but the empty, formal condition accompanying every experience of the empirical ego. Kant calls it the transcendental (conditional) unity of all possible experience (or apperception) a priori. Like the transcendental system of which it is the formal head, it is, by necessity, shared by all those who cognize. ε. "What can I know ?" is the first question asked. Which conditions make knowledge possible ? To denote this special reflective activity a new word was coined, namely "transcendental". This meta-knowledge is not occupied with outer objects, but with our manner of knowing these objects, so far as this is meant to be possible a priori, i.e. always, everywhere and this necessarily so. Kant's aim is to prepare for a true, immanent metaphysics, different from the transcendent, dogmatic ontologisms of the past, turning thoughts into things. ζ. The transcendental system of the conditions of possible knowledge (or transcendental logic) is a hierarchy of concepts defining the objective & subjective ground of all possible knowledge, both in terms of the synthetic propositions a priori of object-knowledge (transcendental analytic covering understanding), as well as regarding the greatest possible expansion under the unity of reason. These transcendental concepts are not empirical, but are the product of the transcendental method, bringing to consciousness principles which cannot be denied because they are part of every denial. They are "pure" because they are empty of empirical data and stand on their own, while rooted in (or suspended on) the transcendental "I think" and its Factum Rationis. η. In classical (Kantian) criticism, reason, the higher faculty of knowledge, is only occupied with understanding, while the latter only processes the input from the senses. Reason is deemed not to have an intellect to inform it ! No faculty higher than reason ! In hypermodern criticism, meta-rationality, intuition or "intellectual perception" (in the form of nondual cognition) are not denied a priori. The creative objects of creative thought, as well as the ineffable dual-unions of nondual cognition, are accepted and explained.
This links epistemology with aesthetics and art as well as with mysticism, as clarified by transcendent metaphysics. ∫ Classical criticism still accepts substances. Hypermodern criticism banishes the archaeology of truth, beauty & goodness. Nowhere does it find self-powered entities ...
Criticism seeks a hierarchy of concepts defining the objective ground of all possible thought, knowledge, cogitation, apprehension, imputation, attribution & mental grasping ... This object is not found in a self-sufficient extra-mental ground, but in the conditions & determinations of the mind itself. Transcendental logic deals with the general dualistic set of principles ruling the possibility of cognition in all its modes. Epistemology explains how (valid) conceptual knowledge is possible and produced. The issue is reduced to conceptuality, present in only four out of the seven modes (cf. the proto-concept -or concrete concept-, formal concept, transcendental concept & creative concept). In the first two modes (mythical & pre-rational) the concept is not yet formed, while in the last (nonduality) it is radically transcended (left behind). Criticism integrates some of the findings of genetic epistemology and tries to bring out the full scale of stages & modes featuring knower, knowing & known. The development of this faculty of cognition runs in three fundamental stages, called "ante-rational", "rational" & "meta-rational". Seven modes of cognitive functioning ensue : mythical, pre-rational & proto-rational cognition (for ante-rationality), formal & transcendental cognition (for rationality), creative & nondual cognition (for meta-rationality). Only by thoroughly understanding the instrument, while it performs all possible cognitive activities, is it possible to assess the capacity of our tool, the mind. Both ante-rationality & meta-rationality are interesting stages. They are necessary in an extensive view. But classical criticism focused on the rational stage. Ante-rationality shows how pre-formal concepts operate. It makes us appreciate that these concrete concepts may offer a strong sense of closure and thus endure for millennia. Meta-rationality invites us to push the limits of reason, allowing it to access higher possibilities with increasing degrees of freedom. Investigating the extremes makes the Middle Way of reason a suitable path. Not eclipsing the poles allows reason to spread out its wings as far as possible.
D. Valid Science & Critical Metaphysics.
Together but apart, valid science and critical metaphysics complement each other. Without valid science, speculative efforts may wander away from conventional truth. The totalized views thus arrived at will not easily connect with the mainstream. How can they be helpful, assist, inspire or accommodate care for others ? Without critical metaphysics, science no longer strives to seek beyond its furthest horizon. It turns all of its attention to further analysis and lacks a general, synthetic view inviting new vistas & possibilities. Speculating while assuming radical nominalism purifies metaphysics from making absolute statements about phenomena. Making the case for universal interdependence and absence of substance, critical metaphysics invites the mind to purify concepts by means of concepts. This ultimate analysis is not the cause of nondual cognition, but merely eliminates the reifying tendency of the mind, positing substance or x. Once this tendency is completely eradicated (as in ¬ x), the mind is totally healed of any delusion.
It no longer sees a darkened rope as a snake, but things as they are. This suchness/thatness of phenomena is a datum of nondual cognition, although not in the sense of conceptual knowledge. The direct experience of this absolute reality is ineffable, but its impact on the mind is decisive and so highly relevant. A mind impressed by this will comprehend interconnectedness more clearly, with more width and depth. This indirect role of transcendent metaphysics on immanent speculations cannot be overestimated. Because metaphysics is always present in the background of testing & argumentation, and so cannot be eliminated, a critical positioning is necessary. Metaphysics is not foundational. It does not act as an archaeology for correct logic, truth, beauty & goodness. Nor is its ontology more than a current & conventional picture of the world lasting as long as its constituting elements remain valid. Metaphysics is not testable. It is therefore not a science, but a heuristic instrument of science, a "speculum" reflecting a totalizing, comprehensive worldview or apprehension of the whole and an "ars inveniendi". Metaphysics is not irrational. Only two criteria for validity remain : correct logic and argumentation. Scrambled speculation and/or unarguable positions define invalid metaphysics. Which logic is invoked and how the principles and their developments are argued determines the weight of any metaphysics. ∫ As phenomena are complex, so is metaphysics. Mistrust easy answers even if sometimes they do exist ! § 1 Transcendental Logic of Cognition. α. No act of cognition without, on the one hand, a transcendental object, appearing as an object of knowledge (what ?), and, on the other hand, a transcendental subject or subject of knowledge (who ?), a member of a community of intersubjective sign-interpreters making use of language. Transcendental logic, ruling all possible cognition, captures the fact of reason as the necessary product of two irreducible & entangled sides : • the transcendental subject : the thinker, the one thinking, a knower as it were possessing its object ; • the transcendental object : what is thought and so placed before the subject as the known. The transcendental subject is not a closed, Cartesian substance or ontological "ego cogitans". It is more than a mere Kantian unity of apperception accompanying all cogitations. Intersubjectivity, language-games, the use of signals, icons and symbols by persons and groups, enlarge the scope of the transcendental subject, appearing as a community of language users, both in terms of personal membership(s), and the actual discourses, as well as their historical tradition (the magister of past, successful games). Concrete discourses are regulated by absolute ideality (the Ideal). The transcendental object is not a construct of mind, a shadow or a reflection of merely ideal realities. Although the direct evidence of the senses is co-determined by the observer, objective knowledge is possible and backed, so must we think, by the extra-mental or absolute reality (the Real). ∫ Without a known, one cannot posit its knower. Without a knower, one cannot possess a known. β. In conceptual cognition, the Factum Rationis must be a concordia discors, for both sides ought to be kept together but apart. They engage in communication to achieve a common goal : correct (conventional) thinking & knowing, i.e. the production of valid or justified empirico-formal propositions. γ.
In mythical & nondual cognition, the duality identified by transcendental logic is present but special. While emphasizing the object, mythical cognition confounds object & subject. It is not reflexive, without a trace of self-reflection and usually focused on some grand object. At the other end of the spectrum of cognition, nondual thought is the pinnacle of reflectivity and reflexivity ! Being non-conceptual, it merely escapes the reification of the duality of the fact of reason, but not the duality itself. Suppose duality were superseded, i.e. turned into a higher unity. Then nondual cognition could not be an act of cognition, for nonduality would be monadic. Although dualistic, nonduality implies a dual-union. ∫ Duality does not pose problems, but its reification does. The absolute experience of duality is the experience of nonduality. δ. Critical thought raises the reflective to the reflexive. Pre-rational concepts tend to stabilize and become concrete concepts offering mental closure. The pre-concept, because of its semiotical entrenchment, introduces the first inkling of reflectivity. Pre-concepts & pre-relations are dependent on the variations existing between the relational characteristics of objects and cannot be reversed, making them rather impermanent and difficult to maintain. They stand between action-schema and concrete concept. With proto-rationality, the ante-rational phase of the genesis of the cognitive faculty finds closure, harmonizing mythical traditions, original concepts and their concrete realization in cultural objects. Formal thought liberates the self-reflective nature of cognition from the confines of contexts, introducing abstraction, theory and free dialogue. This reflective process is carried through and refined by transcendental cognitive activity, laying bare the principles, norms & maxims of conceptual reason. Producing (hyper)concepts, creative cognition brings the mind to its largest possible extension. It does not, however, observe its own natural state, but the own-self and its complex creative hyper-thoughts. Emptied by ultimate logic, the former creative mind may directly experience its own nature. The nature of mind is ultimate reflectivity & reflexivity. In other words, the absolute mind fully knowing the absolute object. The nature of mind is (a) self-clarity, (b) primordial absence of conceptualization, (c) spontaneous self-liberation of mental flux, (d) unobscured self-reflexion and (e) impartiality. The transcendental system -laid bare by a reflection on the conditions of all possible cognition- is before the facts or a priori. It makes clear the intra-mental mechanism of the knowing mind, existing on the side of the transcendental subject only. Its principle is not monadic but dualistic. All cognitive acts involve a subject (the object-possessor) and an object (the subject-possessed). The role of the subject is crucial : it alone possesses the object, not vice versa. In mythical cognition and nondual cognition, non-conceptuality prevails, either by innate confusion or by thorough elimination (purification) respectively. In nondual cognition, object and subject form a dual-union, a special condition allowing a direct experience of full-emptiness, the unity of the absolute nature of all phenomena (emptiness) with the universal interdependence between all phenomena (fullness). The transcendental system works with principles. In all acts of cognition, the Who ? and the What ? are present.
The subject refers to a mental "prise de conscience" of an object leading up to opinion, idea, hypothesis and theory. Without a subject, how can anything be known ? The object is an extra-mental reality. It has a decisive role to play : to tell us which possibility eventuates. It informs about the transition from mere potentiality (or possibility) to actual occasion (or concreteness). Is it this or that ? Without an object, the subject cannot be posited either. § 2 The Correct Logic of Scientific Discovery. α. The propositions of science are (a) empirical, (b) formal and (c) in that order. They are empirical because without sensate objects the extra-mental cannot be established. They are formal because without mental objects nothing can be labelled. Empirico-formal statements are foremost empirical because science is fundamentally preoccupied with the theory-independent side of facts, i.e. it tries to think about Nature without thinking about thought. All possible scientific knowledge is in the form of empirico-formal propositions. These are terministic (probable) but in all cases fallible and thus relative conditions & determinations. ∫ Science is about knowledge merely working for a while. β. Epistemology is a normative discipline, bringing out the principles, norms and maxims of valid conceptual knowledge. This empirico-formal information is true in the eyes of all involved sign-interpreters. The rules of valid conceptual cognition must be used in every correct cogitation producing valid conceptual knowledge. This is conventional knowledge, concealing the nature of phenomena, namely their lack of existence in and of themselves. Indeed, this worldly knowledge displays sensate objects as independent of and separated from the consciousness apprehending them. γ. The principles of cognition in general are given by transcendental logic, the norms of conceptual cognition are defined by the theory of knowledge (and truth) and, hand in hand with these, the maxims are given by the knowledge-factory of applied epistemology. This edifice is not a description arrived at by observing the faculty of cognition from a vantage point outside it. It is a normative set of rules found to apply when cognition cognizes the possibilities of cognition itself, i.e. tries to find the objective and subjective conditions accommodating conceptual reason in general and formal reason in particular. ∫ Epistemology is always about both object and subject. To eliminate either one is to plunge the theory of knowledge into ontological illusion, solidifying the conditions of knowledge in a pre-epistemological ground outside knowledge. δ. Science deals with propositions arrived at by the joint efforts of experimentation & argumentation. The former is foremost an activity involving objects, the latter is foremost intersubjective. The discordant concord of both object and subject of conceptual knowledge is necessary. Each must defend its own interest while maintaining the discordant truce. This is essential to produce conceptual knowledge that works. ε. Both object and subject constitute conceptual knowledge, and each -driven by opposing interests- aims differently. On the one hand, testing requires the monologue of Nature. Only extra-mental data are sought. Nature is given the opportunity to answer questions in a clear-cut way. Neither theory nor intersubjective cognitive activity acts as a source for this monologue. The issue is to know how Nature can be kicked and how Nature kicks back. On the other hand, argumentation is dialogal and so intersubjective.
The monologue with Nature is silenced and replaced by discursive activities, involving theory-formation, discussion, dissensus, argumentation, consensus and theory-transformation. ζ. Testing and argumentation always imply a "ceteris paribus" clause and operate against the implicit or explicit background of untestable metaphysical speculations. Moreover, what science understands under "testing" is also undergoing change. Proposing hypotheses, conceiving tests to validate or refute these and carrying out controlled tests repeatedly is the simplistic approach to experimentation of physics-like science. In biology-like science this is not possible, for no two living things are exactly identical in the way two elementary particles are. Medical science cannot function without case studies, anecdotal reports, case histories etc. Insofar as science becomes biology-based, one may expect the emergence of consciousness-like science. The principles of the transcendental system give rise to a theoretical inquiry into the conditions of conventional knowledge. The mere possibility of a subject of cognition (the transcendental subject) becomes a concrete subject of knowledge. Likewise, the transcendental object turns into an actual object of knowledge. Theoretical epistemology studies the possibility & validity of scientific knowledge. It restricts epistemology to the formal and transcendental modes of cognition, trying to organize the possibility & expansion of scientific knowledge in terms of principles and norms a priori. Its critical format avoids both a dogmatic ad hoc and a sceptic principiis obstat. Empirico-formal propositions are possible because facts possess, so are we obliged to think, extra-mental "stuff" informing us about absolute reality. Unfortunately, we only "catch" this with the "net" of our own theories, so lots of it slips through and is lost to us. Subject and object represent different interests but have to work together. Argumentation and testing are the tools with which scientific progress is made. Indeed, both intersubjective consensus and monologous correspondence offer the necessary criteria to validate empirico-formal propositions. § 3 The Validity of Scientific Knowledge. α. By shaping the unconditionality of the object of knowledge, the idea "absolute reality" or "reality-as-such" (the Real) guarantees the unity & the expansion of the monologous and object-oriented side of conceptual knowledge. This monologue intends correspondence (with facts). By shaping the unconditionality of the intersubjectivity of knowledge, the idea "absolute ideality" or "ideality-as-such" (the Ideal) guarantees the unity & the expansion of the dialogal subject-oriented side of conceptual knowledge. This dialogue intends consensus (between all involved sign-interpreters). These ideas do not constitute conceptual knowledge, they regulate it to bring about its highest unity & expansion. α.1 In every observation of fact, both regulations are simultaneously at work. The idea of the Real pushes the mind to pursue sensate adventures, whereas the idea of the Ideal brings its constructions in the larger arena of the community of interpreters of signals, icons & symbols, seeking consensus and approval. Experimentation concentrates on the real. Discourse, dissensus, argumentation and consensus on the ideal. Both intend to articulate empirico-formal propositions or statements of fact, in casu valid scientific knowledge.
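The joint working of both regulative ideas can be condensed in a schematic form. What follows is a minimal logical sketch only, not part of the classical apparatus ; the predicate names "valid", "corr" (correspondence with fact, under the idea of the Real) and "cons" (consensus between all involved sign-interpreters, under the idea of the Ideal) are introduced here purely for illustration :

\[
\forall p \, \big[ \, \mathrm{valid}(p) \;\Leftrightarrow\; \mathrm{corr}(p) \,\wedge\, \mathrm{cons}(p) \, \big]
% p ranges over well-formed empirico-formal propositions ;
% corr(p) is regulated by the Real, cons(p) by the Ideal.
\]

Dropping either conjunct reproduces the one-sided accounts criticized further on : correspondence alone yields realism, consensus alone idealism.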
α.2 Experimentation, regulated by the idea of the Real, involves a one-to-one relationship with the object of knowledge, at the maximal exclusion of intersubjective dialogue and discussion. It is always instrumental. This is the image of "objective" science as the monologue of Nature with herself. The highest art of dialogue, regulated by the idea of the Ideal, involves the constant dialogue with & between other subjects of knowledge about ideas, concepts, theoretical connotations, conjectures or theories. Here we have the image of a community of people seeking the truth about something and communicating to find out what it is (as in the more contemporary forms of idealism and social theory). ∫ Valid scientific knowledge is the set of well-formed propositions validated by argument & experiment. β. The ideas of the Real and the Ideal converge towards an imaginal point, Real-Ideal or "focus imaginarius" which, as a postponed horizon, is a complete, universal consensus on the adequate correspondence between our theories and reality-as-such. The "adaequatio intellectus ad rem" or "veritas est adaequatio rei et intellectus" of the realist goes hand in hand with the "leges cogitandi sunt leges essendi" of the idealist. Both ideas are pushed beyond any possible limit (beyond "Diesseits"). Thus unconditional, they represent what transcends conceptuality ; a perfect unity between thought and fact, as it were the dwindling away of the theory-dependent facet of facts, a fiction brought about by the faculty of imagination. This heuristic fiction suggests a position "beyond the mirror surface", a "world behind" ("Jenseits") regulating the possibility of knowledge without grounding the latter or serving as its foundation. These two ideas voice the fundamental property of scientific thinking, namely the discordant truce expressed in the continuous & permanent confrontations between "testing" (object of knowledge) and "language" (subjects of knowledge). ∫ Not science, but transcendental philosophy unearths, posits & clarifies the rules of the game of true knowing. γ. Depending on correspondence & consensus, the empirico-formal propositions of science are valid or invalid. Valid propositions always call for both correspondence (between theory and fact) and consensus (between all involved sign-interpreters). The actual paradigm of science consists of all valid empirico-formal propositions. ∫ After millennia of invalid science posing as absolute truth, the question of validity is crucial. We don't need another dogma or anti-dogma, but a critical demarcation between what works and what does not. δ. On the side of the object of knowledge, we must think "reality-as-such" as knowable, but this without being conceptually equipped to know whether this is the case or not. Absolute reality, apprehended by nondual cognition as absolute truth, is ineffable. Facts are intra-linguistic and so co-determined by the notions, opinions, ideas, theoretical connotations, hypotheses & theories formulated by the subject of knowledge. But facts are also -so must we think- extra-linguistic, i.e. the messengers of this absolute Real. Given this ambiguity, facts do not a priori represent absolute reality, nor reality-for-me, but merely reality-for-us. ∫ The letters of confidence presented by facts may be fakes, and in an ultimate sense they are. Insofar as they conceal process, they merely appear as substances. ε.
On the side of the subject of knowledge, we have to think the "consensus omnium" as possible (without us ever reaching it in an actual discursus). In this way ensues the distinction between (a) "my" consensus (with myself), (b) "our" consensus here & now (i.e. the agreement between the users of the same language) and (c) the "consensus omnium", the regulative idea on the side of the subject of knowledge. The theory-dependent facet of facts is intra-linguistic. It belongs to a theory to form a pattern of expectation. But this pattern, although always rooted in my subjectivity, is in truth always inter-subjective, belonging to a community of communicators using signs (signals, icons & symbols). ∫ The power of conviction portrayed by an actual consensus may be fallible, and in truth it is. Concealing change, conviction merely appears as solid, lasting & trustworthy. ζ. In the present critical theory of truth, merely seeking to find reasons to accept a theory as if true or conventionally true, the following categories emerge : • the subject of knowledge / the one thinking / intersubjective discourse or dialogue (discussion, dissensus, argumentation, consensus, etc.) / consensus omnium / the idea of the Ideal ; • the object of knowledge / what is thought / monologous testing (experimental setup, tests, observations) / adaequatio intellectus ad rem / the idea of the Real. It falls to transcendental philosophy to unearth the conditions of this capacity of the mind to apprehend the truth of the matter. This discipline does not belong to science, but exclusively to normative philosophy. A theory of truth explains how to validate empirico-formal propositions. Testing statements of fact, but observing no correspondence with the facts, means invalidating them. To discuss these propositions, but finding no consensus regarding them, invalidates them. Being insignificant (in the statistical sense), they cannot enter the current paradigm of science. The ability to validate propositions is crucial to science. In a realist account of knowledge, one grounding the possibility of knowledge in a pre-epistemological self-sufficient ground, in casu, the Real, validation is induction. Accumulating data is supposed to lead to generalizing statements of fact. Logically incorrect, induction fails to deliver. A finite set of observations cannot back a general statement. Dogmatic falsificationism avoids the problem of induction by turning things upside down. Instead of starting with a number of individual propositions from which to derive a general law, it begins with a universal statement and tries to find exceptions. If one is found, then the general statement is refuted or falsified. This variant of empirical justificationism accepts that a theory can never be completely justified. Hence, the more it is corroborated, i.e. withstands attempts at falsification, the more trustworthy the theory becomes. But the naturalistic, onto-epistemological presence of a given empirical ground is not yet left behind. A pre-epistemological moment is retained. Refined falsificationism no longer accepts any "ontological" confrontation between theory and fact. Coherence replaces correspondence. Only theories clash. This answers the question of how to translate sense-data into propositions. Only propositions clash. Critical theory adds the hybrid nature of facts.
Janus-faced, they are two-faceted : one facet, turned towards the subject of knowledge, is theory-dependent and intra-mental, and the other, turned -so must we think- toward the reality of the object of knowledge, is theory-independent and extra-mental. We recognize something as "a fact" because our theories allow us to do so and because this fact acquired, so we believe, the guarantees of absolute reality (the Real). In an idealist account, an ideal self-sufficient ground is designated. Conforming facts to mentality, idealism is generated whereby the object is constituted by the subject, by the Ideal. But a general consensus does not deliver either, for facts must refer to extra-mental phenomena, and so in some way have to escape language. But both positions do contain a nugget of gold. Realism makes us understand that knowledge implies a known and that the latter cannot be exclusively mental. Idealism points to the intersubjective use of language, and the theory-dependence of observation. So in terms of validation, a reconciliation or coherence between a correspondence theory of truth and a consensus theory of truth accommodates the critical understanding of how knowledge is validated. This happens in a transcendental coherency theory of truth. On the side of correspondence, test & experiment stand out. They are deemed a monologue with Nature. Here is decided which possibility (out of an infinite set of possibilities) will actualize to become concrete. On the side of consensus, intersubjective dialogue is at hand. This dialogue involves all possible speech-acts done in the pursuit of knowledge and its advancement, but may be restricted to conjecture, disputation & (dis)agreement. The interaction between both interests assists their entanglement : disagreement invites new experiments, and new experimental results bring about conceptual changes calling for a new discussion, etc. The ongoing nature of this process of communication intends to harmonize correspondence & consensus. Because no direct, one-to-one observations of the Real, nor the realization of the Ideal by a concrete community of sign-interpreters, are accepted, criticism opts for a transcendental coherency theory of truth. § 4 Casus-Law : the Maxims of Knowledge Production. α. What scientists have been doing (diachronically) and what they do today (synchronically) is not identical with the principles and norms of knowledge they are always using (and abusing). β. Theoretical and applied epistemology are both necessary. The former may be compared to "statute-law", universal, imperative and normative, the latter to "casus-law", local, adaptive and descriptive. Contextualism and decontextualization are both necessary, and so a one-sided emphasis on either "what must" or "what is" falls short. A pluralistic system of authority between them is needed. γ. In applied epistemology, the context of knowledge-production is studied, and so the principles & norms of knowledge are not made explicit. In every concrete situation they are at work and are addressed. Theoretical epistemology is general & necessary (a priori), applied epistemology is contextual & situational (a posteriori). The latter affirms the laws of discovery to be context-specific and complex, far beyond the capacities of a simple formal logic. ∫ Good scientific research depends on many important factors outside the conditions of epistemology, like, for example, enough orgiastic sex. δ. To ask : Quid juris ? is to foster the normative approach prevailing in theoretical epistemology.
As such, validity and justification of knowledge rule over how it is produced. In applied epistemology, the logic of discovery answers the question : Quid facti ? This is the difference between the idea of a stable and universal method and the constant revision of standards, procedures and criteria as one moves along and enters new research areas. Take note of the distinctions between the principles of transcendental logic, the norms of theoretical epistemology and the maxims of applied epistemology. These rules of transcendental philosophy aim at different objects, namely the general structure of cognition, the conditions of conceptual knowledge & its validation and the production of valid empirico-formal propositions. ε. The general structure of applied epistemology is derived from theoretical insights, for (a) the subject of knowledge and its norms becomes the subject of experience and (b) the object of knowledge and its norms, the object of experience. In physical science, the latter is given form as the rules of experimentation, whereas in the human sciences, the rules of participant observation are applied. Both make use of this-or-that actual discourse, with its non-strategic communication (dialogue, dissensus, argumentation, consensus). The maxims ruling an actual research-cell are not like binding norms. Deviation from them is possible, but not advisable. Violating a maxim does not entail the end of the possibility, unity & expansion of knowledge, but slows down its actual manufacture. The process of production is not halted (and replaced by an illusion), but efficiency drops. Hence, the research-cell at hand will suffer and become a less attractive competitor in the market of available facts. To produce knowledge, there are no absolute rules. Once its actual process of manufacture is set afoot, merely valid theories & rules-of-thumb prevail. The latter cover argumentation & experimentation. Nevertheless, these relative constructs are important and do result in scientific advancement. The opportunism and contextuality of some of these procedures underline the conventional nature of scientific knowledge. Although science is the pinnacle of conventional knowledge (in the mode of formal reason), it ever remains a relative, fallible and incomplete attempt to understand Nature. To consider it as solid, unchangeable and secure is merely a waking dream. Conceptual reason is simply not equipped to grasp the absolute Real-Ideal. Science is terministic, probable, conventional. Only a humble & kind science is a true science. Conventional knowledge, whether valid (as in the case of science) or invalid, misrepresents the world. The maxims of knowledge-production call to methodologically accept realistic correspondence & idealistic consensus as if true. The way of science must confirm the substantial nature of its objects, and epistemology must grant this, at least as a method to expand knowledge. Physical objects must be independent & separate. Because of this reification baked into the methods of science, conventional knowledge is valid but mistaken. It is valid because (a) this knowledge truly functions in terms of material, informational & sentient features and (b) its objects exist in a relative, impermanent, interdependent way. It is mistaken because it reifies its objects into static entities, concealing their fundamental process-based nature. § 5 Metaphysical Background Information. α. The proto-rational, formal, transcendental (critical) & creative modes of cognition are conceptual.
Together, they form the set of all possible conventional knowledge. Through proto-rationality, the ante-rational remains linked with rationality. In these early stages of the development of the mind and its cognitive apparatus, we call forth our unconscious metaphysical beliefs, dreams and expectations. ∫ Refusing pain (denial) and seeking pleasure (identification) are the earliest ego-building operations the mind familiarizes itself with. β. The integrated presence of the ante-rational mind in the higher modes of conceptual cognition can be traced as generalizing beliefs and unarguable "feel right" frameworks. By countless ante-rational coordinations of movements, their introjection & stabilization as mythical, pre-rational and proto-rational mental operators, continuity, tenacity, substantiality, solidity, independence, separateness etc. are given form. ∫ To know what to refute is to be able to identify the truth more clearly. γ. The problem situations encountered in science are due to three factors, namely (a) inconsistency within a ruling theory, (b) discrepancy between theory and experiment and (c) the relation between theory and metaphysical background information. The latter not only determines what explanations we choose to attack, but also what kind of answers are fitting, deemed improvements of or advances on earlier answers. This background results from general views of the structure of the world. Themselves untestable, they are speculative anticipations of testable theories. ∫ How many times do we (dis)like something without good reason ? δ. Let us consider a few historical metaphysical backgrounds : • Parmenides : the universe is deemed full, there is no void or empty space. Hence, motion is impossible. A genuine worldview must be rational and so devoid of contradictions ; • Democritus : all change is nothing but movement of atoms in the void. The world is "full" and "empty" at the same time. There is no qualitative change possible, for only rearrangement pertains ; • Pythagoras & Plato : for Pythagoras, the cosmos was arithmetized, a view abandoned with the discovery of irrational numbers. For Plato, matter is formed space, geometry explains the universe ; • Aristotle : space is matter and the dualism of matter and form (hylemorphism) takes over : the essence of a thing inheres in it and contains its potentialities ; • Descartes : the essence or form of matter is its spatial extension. All physical theory is geometrical. Causation is push or action at vanishing distance. Qualities are quantities ; • Newton : causation is by push and central attractive forces (gravity). Every change functionally depends on another change (cf. differentials). Action-at-a-distance seems the only way to explain the central forces ; • Maxwell : not all forces are central, for changing fields of vectorial forces exist whose local changes are dependent upon local changes at vanishing distances. Matter may be explained as fields of forces or disturbances of these fields ; • Einstein : matter is destructible and inter-convertible with radiation, i.e. field energy and thus with the geometrical properties of space. Geometrization of fields is at hand ; • Bohr : before observation, the quantum phenomena exist in a paradoxical state of superposition ruled by quantum logic, and turn only into this particle or that wave after being observed. Most of these vast generalizations are based upon "intuitive" ideas, some striking us now as outdated and mistaken. They presented a unifying picture of the world.
More of the nature of myths, they helped science to find its purposes & inspiration. ∫ Stylish caprice, sharp opportunism & clear improvisation instead of strict lawfulness are the ornaments of the rule of inventiveness. Identifying a substantial, self-sufficient ground or "hypokeimenon" may well be called the fundamental metaphysical dream of the West. Dreaming such a primary reality, existing alone without need of anything from outside, as it were "standing under" phenomena and determining "what they are", means allowing something uncaused or self-caused to possess attributes inhering in it without it inhering in anything else. Insofar as this self-sufficient ground is deemed primary, it is an ultimate substance and so indestructible. The failure of this metaphysical background is evident. Has a single primary substance been identified ? If so, where is it ? ∫ Looking for substance instead of process is our ground addiction. Like fish in the water, we are blind to it. Critical epistemology accepts the task of critical metaphysics to inspire scientific research. It brings the implicit metaphysical background to the surface and identifies its frailties. Substance-metaphysics has to make way for process-metaphysics. The fundamental sufficient ground of all possible phenomena is not an independent, separate, uncaused or self-caused primary substance, featuring properties inhering in it, as it were in and for itself, from its own side, self-powered. ζ.1 In the categories of Aristotle, substance, quantity, quality & relation do exist inherently. Likewise, space, time, matter & momentum are deemed absolute. In essentialism or substance philosophy, discrete individuality & separateness are therefore linked. A fixity within a uniform nature defines unity of being. This allows for descriptive & classificatory stability & passivity. ζ.2 The new metaphysical dream features interactive relatedness, wholeness, novelty, agency, productive drive, fluidity and evanescence. Instead of a unity of being defined under individualized specificity, there is unity of law under functional typology. Science and pre-critical metaphysics cannot be reconciled. Metaphysics no longer acts as a pre-epistemological archaeology & ontology, defining the self-sufficient ground and erecting an architecture upon it. This precisely because it is untestable and so has no sensate objects to offer. Only the language-game of true knowing provides the rules of engagement, setting in motion the process of the manufacture of knowledge. This is conventional knowledge, valid insofar as theory & experiment dictate. Relative and fallible, it cannot be considered permanent or absolute. Moreover, it cherishes a substantialist streak, albeit methodologically. Metaphysics becomes "critical" when the demarcation with science is maintained : science is arguable & testable, metaphysics only arguable. Critical metaphysics is the heuristic of science, its "ars inveniendi". It stays close to science and its development, in particular to the fields of cosmology, physics, biology & anthropology. Moreover, despite the demarcation, it is impossible to eliminate generalizing ideas from the background of scientific research. Metaphysics is a "vis a tergo". Argumentation & experimentation are always conducted with the help of such metaphysical dreams. Insofar as they are implicit, they cannot be manipulated to help current research and so may eventually hinder it. This has to be amended.
Bringing these to the surface is understanding the metaphysics internally driving scientific work, the beliefs carrying the work of reason. Changing the background to accommodate research is therefore primordial. In view of the long essentialist tradition, one cannot stress enough the importance of process, change, transformation and creative advance. Logically, this is the transition from substantialism or ∃x to process or ¬∃x, and this by negating ∃x. "∃" or "there is" is the affirming of persistent existence of "x", in casu the existentializing quantor confirming the permanent existence of x, or ∃x. The dream of finding this indestructible, unchanging substance is over. The hypnotic spell of Plato's dreamwork is broken. Socrates did not refute enough ! The thinkers of Antiquity, the Scholastics & the Modernists posit substance. Ultimate analysis awakens one to the realization that all phenomena are impermanent and devoid of own-power. They are other-powered. If postmodernism was the unavoidable deconstruction of the Modernist dream, then hypermodernism is the affirmation of process and its architectures. May this be the beginning of the final movement in the long march of emancipation of humanity, the emergence of a global consciousness and its subsequent cultural objects. This New Renaissance is not a return to late Antiquity and its Platonism, but an advancement reconciling process and change with interdependence, and the need for a global organization of the affairs of Earth in all crucial issues. E. Thinking Metaphysical Advancement. Because polemics are not the issue here, this paragraph is kept to the bare minimum. Suppose we think metaphysics or philosophy in general is still in the business of discovering a self-sufficient, substantial ground. Given that modern science, in particular physics, has taken over such fundamental preoccupations, one may decide metaphysics no longer has any role to play and so just oust it. Philosophy itself, i.e. this irresistible & definitive longing for wisdom, may be crippled and turned into another ivory tower of academic pursuit, merely offering the logistics. One may wonder in what measure such an instrumental, uncritical and non-innovative approach blights the original beginner's attitude called for in a serious, prolonged and free engagement in this science & art of the love of wisdom. For those denying the very need of metaphysics, any argument backing the notion of advancement in metaphysics must involve a contradictio in terminis. But here ontology is not the aim. A comprehensive, coherent & scientific worldview is. Critical metaphysics is aware of its initial border with science. It only leaves this behind for the final border of transcendent metaphysics, but never without identifying the transcendent signifiers of un-saying. Especially within the immanent order of actual occasions. In the present exercise, mindful of Ockham's Razor, the principle of parsimony, these must be kept to the bare minimum : (a) emptiness inseparable from (b) the Clear Light* of the mind, the seed of awakening ("bodhi"), the potential of enlightenment, forming together full-emptiness. To identify metaphysical advancement, one has to know what metaphysics is all about. Inspired by science and on the basis of a theory on existence (ontology), immanent metaphysics argues a totalizing, comprehensive framework speculating about being, the origin of the universe, life and the human. Its focus is on actual occasions and their concrete form.
Transcendent metaphysics probes into absolute, infinite existence, into pure formless possibilities, the "pure ground" of lacking ground. This is a sufficient ground, but not a self-sufficient ground. Insofar as critical metaphysics goes, speculative advancements are possible. But finding the proper conditions or rules of comparison is crucial. The criteria of instrumental action or experimentation should not be applied here. For in this case, there is no increase in "factuality", but in "mentality". Those who confuse both assume (or force) philosophy to be the copy-cat of experimental science or mathematics. Instead, our criteriology identifies advance by using the logic of communication, a hermeneutics of logical & semantic moments of progress. ∫ Establishing a right view or vantage point is the beginning of thought. § 1 The Mistake of Absolute Relativism. α. In brief, the present metaphysics of process does not endorse absolute relativism. While the intelligently organized interdependence between all possible phenomena is accepted, some special & exceptional items are found and kept absolute. Ergo, the absolute is not rejected, banned, ousted or negated, but given its most efficient role, whatever that is. Therefore, theology, theophany and theonomy are possible, but -given the conceptual limitations of transcendent metaphysics- bound by the rules of non-contradiction and inviting ongoing remodelling. β. The rules of normative (transcendental) philosophy are found to be "of all times". Indeed, they are always in the process of being used by correct conceptual thinking. One cannot even deny their use without using them ! Process ontology argues absolute abstract forms or formative abstracts, mere potentialities like primordial matter, creativity & God*. Likewise, in science, constant values are also found. Very small changes in the highly intelligently chosen natural constants would make the physical world devoid of life & sentience. γ. Transcendent philosophy, using the benefits of ultimate analysis, establishes the non-separability between, on the one hand, the absolute and emptiness and, on the other hand, emptiness and the original mind of enlightenment. This is the absolute united with the nature of mind, the Clear Light* ; the nondual realized by the absolute experience of duality. So also here absolutes pertain. δ. Consider "everything is relative" and "no absolute exists". If the view expressed in these statements is relative, then an absolute might exist after all. Ergo, they are ineffective. But if this view is absolute, then it refutes its own claim ; a contradictio in actu exercito. In both cases the statement is undermined. Saying philosophy knows no advance because all statements are relative is denying historical process and the unfolding architectures of thought. ∫ Some things change while other things are kept constant. Some things are always the same and some things change all the time. The relative and the absolute walk hand in hand. In a general sense, universal relativism is rejected while evolutive, negentropic change (in dissipative, highly intelligent, chaotic living systems) is accepted. This not only involves efficient determining factors, but also state-transformative ones, entering the efficient causation of other actual occasions. A universal continuous creative advance is thus at hand. All objects of immanent metaphysics are constantly changing. But this change is not random, amorphous or without outstanding features.
The change has an architecture involving constants, i.e. principles uninfluenced by the momentum of universal creativity. § 2 Logical Advance. α. Well-defined logical operators increase the quality of communication. But way before this is established, the importance of a priori structures needs to dawn. Then necessity enters the picture and absolute truth becomes singular, for there cannot be two absolute truths, only one. All this was realized by the Eleatics. Before them, concepts remained confused because of an attachment to context enforced by the rules of ante-rationality. Formal reason required abstraction, necessity and the ideas of "everywhere" and "always". The Sophists, using logic but arguing absolute relativity, did inspire the concept-realism of the classical systems of Plato & Aristotle, both retaining the concept of the absolute and desperately trying, in order to justify the objects of knowledge, to find an absolute self-sufficient ground outside knowledge. β. In Late Hellenism, and particularly for the Stoa, language became an independent area of study. Logic was no longer embedded in metaphysics, but part of the new science of language (linguistics). The technical apparatus developed by the Platonic and Peripatetic schools, as well as the mechanics of classical formal logic had been fully mastered. An overview of knowledge was sought, and concept-realism still prevailed. Concepts were either rooted in universal ideas or in immanent forms. Physics studies things ("pragmata" or "res"), whereas "dialectica" and "grammatica" study words ("phonai" or "voces"). The term "universalia" (the Latin of the Greek "ta katholou") denotes the logical concepts of "genus" and "species". The apory between Plato's world of ideas and Aristotle's immanent forms is no longer part of the Stoic context. A simplification took place bringing logic and linguistics to the fore. γ. In the Middle Ages, the apory between exaggerated realists ("reales") and nominalists ("nominales") saw the light. It was a logico-linguistic transposition of the ontological apory between Plato and Aristotle. This advancement was considerable and led to William of Ockham, who finally relinquished concept-realism and formulated radical nominalism. The foundational approach was left behind. In all cases, the nominal representations arrived at are terministic, i.e. probabilistic, stochastic. They concern individuals, never extra-mental "universals". Science deals with true or false empirico-formal propositions referring to individual things called "facts". These empirical data & conceptual constructions are primordial and exclusive to establish the existence of a thing. With Ockham, conventional knowledge acknowledged its frailty. δ. While in the course of history logic became an independent discipline within philosophy, transcendental logic had a direct impact on our grasp of the possibility of knowledge and its production, in particular of science or established conventional knowledge. This after millennia of extreme views, both from the side of the object (as in empiricism) and the subject (as in rationalism). Arising in Western philosophy, but absent in pre-Kantian philosophy, this logic and its articulation point to another crucial step forward in the process of the ongoing advance of the longing for wisdom. Although its early mistakes spurred the ontology of the idealists and the irrationalism of the protest philosophers, criticism has radically & irreversibly ended the long reign of metaphysics over epistemology.
To constructively engage critical metaphysics in the vicinity of paradigmatic science is to be aware that logic is unable to radically ban speculative, totalizing views from science. Working together, two extremes bring forth the Middle Way. The idea that philosophy does not advance and has no paradigm-shifts is wrong. Creative advance affects all phenomena, and philosophy is not an exception. The "death of philosophy" league has tried its best shots but failed. The old roaches are not gone. In fact, their moves are so perfect, they are bound to stay. Logical & semantical advance stares one in the face. While these improvements touch areas later becoming specialized fields of learning of their own (like logic), they also affect the core business of philosophy : to propose a reasonable worldview or total view involving all (known) actual occasions. Meaning-shifts redefine both object and subject of this quest. In the West, the pivotal paradigm-shift was announced by Kant. Although he wanted to secure the necessary and universal status of "rational" knowledge modelled on Newton, his transcendental method proved to be the beginning of the end of substantialism (essentialism) in epistemology and philosophy of science. Moreover, his analysis would eventually raise the important question of the interpreted nature of the sensate & mental objects grasped by the knower. If all conventional, rational, conceptual knowledge is an interpretation, not the "real thing", then all conceptual knowledge is "for us". How can we truly know such relative knowledge is about reality/ideality "as it is", i.e. about the absolute ? Conceptually, there is no way to answer this. We must accept that facts are also extra-mental, but we could be fooling ourselves. A subtle epistemology is aware of the possibility of this universal illusion. A study of knowledge stressing the production (praxis) of knowledge would probably miss it. But in the field of theoretical epistemology it acts as a very powerful reminder of the relativity of all possible conventional knowledge. § 3 Semantic Advance. α. To establish a clear-cut difference between object and subject is the logical prerequisite for semantic stability. This calls for a semantic field of denotations & connotations, part of an architecture, and a dynamic flow or "stream" of sensate & mental referrers. The history of these semantic fields is remarkable, giving rise to a multitude of views concerning objective and subjective phenomena and their states. ∫ A clarification of views results from integrating many different vantage points. β. Take the "psyche", evolving from a gaseous entity (Homer), to a meaning-giver, a sign-user of symbols, icons & signals, in short userware. Take matter, from a solid, self-contained ground (Ionian thinkers), to a stochastic process involving particle-fields or matter-waves (hardware) and an intelligent code ("logos" or software). Both semantic fields result from previous articulations and the process is ongoing. But a slow integration & clarification is present. This points to semantic advance. It is impossible to include an evolution of the philosophical vocabulary of the West since Ptahhotep (ca. 2300 BCE). But such a project would present the case of (a) countless redefinitions of a series of basic terms referring to certain recurring sensate & mental objects and (b) a number of drastic meaning-shifts in the denotations & connotations present in the semantic field of these terms, leading to a very slow but definite creative advance.
Four dazzling moments : (a) Greek civilization realizing the decontextualized mode of thinking according to formal logical rules, (b) Kant initiating his Copernican Revolution, (c) Wittgenstein defining the meaning of words as their use, (d) Derrida deconstructing the transcendent signifiers. 1.2 Immanent Metaphysics. Since when did humankind's curiosity start to extend beyond the satisfaction of mere instrumental & strategic needs ? When did total observation dawn ? First as a view to totalize the experience of the world and then as questions about what lies further than the horizon, about the beginning & the end of oneself and the world. This supposes communication, the process of conveying information and connecting with other sign-users of signals, icons & symbols. Striking evidence of this cycle of communication, stamping temporary glyphs upon physical states, is found in the French cave of Pech Merle, around 16,000 BCE. It is the representation of a human hand ! Iconically & symbolically, the Upper Palaeolithic is rich. The Cro-Magnon worshipped the Great Mother Goddess and manipulated a variety of symbol-sets. These superior hominids were able to symbolize their experiences. They invented initiatory rites and a variety of tools. Moreover, before them Homo sapiens Neanderthalensis was religiously active (cf. their cult of the dead - ca. 30,000 BCE). The Neolithic (ca. 10,000 BCE) brought a fixed horizon of observation and the agricultural cycle. If earlier glyphs were mostly Lunar, diffused and fertility-based, they soon became Solar, centered and organizational. Experience moved from a variable local horizon to a fixed one, empowering economic & political stability. The advent of Pharaonic Egypt is an enduring example. These prehistorical, ante-rational & bi-polar symbols are a treasure-house of images & metaphors. They are contextual pre-concepts & concrete "operational" mental procedures. In a less coarse mentality, they work in the background of future metaphysics, underlining the bi-polar experience of the world. • immanent symbols : "phusis", accidental existence, world of becoming, Demiurge, Generator, Conserver, She, pantheism - the Lunar symbols ; • transcendent symbols : "arché", substantial existence, essence, world of being, God, Creator, He, theism - the Solar symbols. The Latin roots of the words "immanent" or "in" + "manere", to remain, and "transcendent" or "trans" + "scendere", to climb over, point to the ideas of the proper part or character of something and the absence of such. Every x is immanent to y if, and only if, x is a proper part of y or a character (proper or inherent property) of y. This belongingness and interrelatedness (interdependence) is reflected in fertility-symbols and the mystery of life & childbirth. Every x is transcendent to y if, and only if, x is not immanent to y and there is a z immanent to y serving as an indicator of x (both definitions are recast formally in the sketch below). The notion of x being superior, more exalted or ontologically higher may be added, but this is more a kind of theological compliment. This otherness and sacred separation-from is found in all forms of paternalism, conservatism, authoritarianism, centralism & royalism. This is the mystery of the hunt & the kill. Immanent metaphysics strives to realize a comprehensive view of the whole spectrum of actual occasions displayed by the two outstanding ideas of reason : reality & ideality, both rooted in transcendental logic.
It dares to speculate and seek out the periphery of the objective world, as well as the frontiers of the mind and its cognitive possibilities, including the realization of the absolute & relative minds of enlightenment for all sentient beings. Attentive to critical thought, immanent metaphysics, remaining close to science, merely assists in the introduction of transcendence. Although still conceptual, it cultivates the creative mode of cognitive functioning. This mode invents speculative conventional knowledge inspiring the advance of science and inviting the final frontier. It serves the conventional. Pre-critical, it affirms the inherent existence of the world and its actual occasions. Doing so, it superimposes the mere illusion of inherent existence upon the world. To strip ontology from this will be the task of ultimate analysis, the conceptual device ending the reifying tendencies of conceptuality, stopping its substantial instantiations. As the muse of science, immanent metaphysics does not accept determinations like First Causes, to operate from outside the world. In fact, the world is not determined as finite or infinite. The world is merely that which is, the set of all actual occasions or actualizations of potentialities. The highest creative hyper-concepts are limit-concepts, always referring back to conditions remaining part of the world. To define the latter, the results of experiments and the outcome of argumentation prevail. Suppose the condition of immanence, this situation of being within and not going beyond a given domain, is left behind and -inviting an infinite regress- a First Cause is posited ad hoc. Then a grounding explanatory principle outside the world ensues. There are no valid arguments to back this and therefore transcendent metaphysics cannot be conceptually elaborated without obfuscating reason. We cannot move beyond the view of an explanatory principle lacking any self-sustaining properties, empty of itself and full of the manifold of architectures of interdependence and interconnectedness, of actual occasions entering the creative life of other actual occasions. This is the impact of ultimate analysis on conventional knowledge. This use of the word "immanent" recalls the distinction mentioned by Aristotle (Metaphysics IX, viii 13), namely between an actuality residing in a thing and one not abiding there. Is the realization of the end of an action part of the action or does it transcend it ? The intent of this realization is always immanent to the action, but is the realization of its end ? For Kant, the use of an idea can either range beyond all possible experience or find employment within its limits (Kritik der reinen Vernunft, A643/B671). For Husserl (Logische Untersuchungen, 1900), the act of consciousness is deemed intentional, i.e. directed to an object. This directedness, intentionality or "prise de conscience" is immanent to the act of consciousness, the object intended is not. Immanent metaphysics must be able to argue a comprehensive rational picture of the metaphysical horizon, integrating a wide variety of scientific data. Insofar as transcendent metaphysics, being nondual, cannot be verbalized, all efforts to stretch beyond immanence must be deemed futile and, at best, of sublime exemplary poetic value only. Can validation have meaning in nondual terms ? As authenticity perhaps ? Then only in what one does and in what one does not may traces of it be found ...
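The twin definitions of "immanent" and "transcendent" given above admit a compact rendering in predicate logic. This is a minimal sketch only ; the symbols P(x,y) for "x is a proper part of y", C(x,y) for "x is a character of y" and Ind(z,x) for "z serves as an indicator of x" are introduced here purely for illustration :

\[
\mathrm{I}(x,y) \;\Leftrightarrow\; \mathrm{P}(x,y) \,\vee\, \mathrm{C}(x,y)
% x is immanent to y : proper part of y or character of y
\]
\[
\mathrm{T}(x,y) \;\Leftrightarrow\; \neg \mathrm{I}(x,y) \,\wedge\, \exists z \, \big[ \, \mathrm{I}(z,y) \,\wedge\, \mathrm{Ind}(z,x) \, \big]
% x is transcendent to y : not immanent to y, yet indicated from within y
\]

Note that transcendence is stronger than mere non-immanence : without an immanent indicator z, the transcendent x could not even be pointed at from within y.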
In a "Diesseits" metaphysics staying within the limitations of possible experience, the world is all there is and the existence of something is only the instantiation of its non-inhering properties. Science observes and argues a series of predicates ascribed to objects, and pours these transient connections in non-eternal, probable, approximate synthetic propositions a posteriori. Using this information alone, no necessary Being can be inferred. Cognition is empty of substantial self. The highest being to be inferred a posteriori remains proportionate to the world. Only an immanent natural theology is possible. As nonduality is cognitive but non-conceptual, it merely leads to a theognosis, not to a theology proper. In a classical, Platonizing transcendent "Jenseits" metaphysics, there is more than the world of experience, for the latter, in phenomenological terms, i.e. as revealed by the things themselves, is merely the theophanic contraction of absolute Being. Hence, each fact reveals more than the series of property-predicates ascribed to it, for each fact is (also) an epiphany or substantial self. To supersede the world, is to stand in one's own essential Being or being-there ("Dasein"), self-sustained with inhering properties existing from their own side, self-powered. The a priori arguments of Anselm of Canterbury, backing the ontological proof of God, aim to posit this transcendent Being as an existing Being analytically, thus including the finite world in infinite Being. They fail to deliver this (cf. Criticosynthesis, 2008, chapter 7) and, in order to book any success, need to axiomatise (a) substantial existence and (b) a semantic adualism between the subjective mind and the extra-mental, called "outer" world. In the radical nominalism of critical thought, such a substantialist, essentialist axiom is not retained. In a first movement, metaphysics is immanent and a heuristic, speculative, suggestive, innovative and spiritualizing system of arguable & totalizing statements about the world. In particular how the cosmos came about, how life emerged and what the nature of sentience is ? In a second and final movement, metaphysics moves beyond the world. If so by positing a "higher" ontological self-sufficient ground of any kind, i.e. a positive concept, then the apex of cognition has been reified and one enters the domain of nonduality as a substantialist, leading to the extremes of radical non-affirmation (of anything) and radical affirmation (of an eternity of sorts). This is a return to the tragedy of pre-critical metaphysics. However, the "essence" or "substance of substances" aimed at in such a traditional transcendent approach cannot be found. What can be experienced is not a substance, but a process and it is ineffable. It may be shown as an object of art or possibly given as the sacred or the holy in direct mystical experience and its religious superstructures. Never conceptual object-knowledge, it is born from the light of activity, i.e. performed, acted, done. If transcendent metaphysics avoids positing a self-sufficient ground outside the world -accepting there is but the world and that is it- and merely points to the set of "all possibilities", it may introduce the transcendent, absolute, ultimate nature of all phenomena as (a) the absence of substantial ground and (b) the set of all potentialities, virtualities, open possibilities manifesting as actual occasions. And these non-temporal formative elements or abstracts are themselves not actual occasions. 
So in the meta-nominal, meta-rational stage of cognition, two modes are distinguished :
• the immanent : the contemplative, creative activity of the arguable, non-factual ideas (hyper-concepts) of the (higher) self, perceived by the intellect (cf. immanent metaphysics) and
• the transcendent : the nondual activity suggested by the direct discovery of the unconditional core of all that is.

Immanent metaphysics looks at objective reality & subjective ideality. Its only merit is being comprehensive in an intelligent way. Both reality & ideality, sensate & mental objects, are actual events, or a set of moments defined by differentials, i.e. immeasurably small droplets part of the ongoingness of the worldstream. To divide this stream into these two sides or banks reflects the conditions of cognition as they have existed since the onset of semiotical functions. In the mythical phase of cognition, only differentiations in the coordination of movements prevailed. Pre-rationality sees the birth of duality as a mental construct. Duality is reified in concept-realism, affirming the substantial existence of things. From material coordinations of movements, the material operator or functional signature of physical actual occasions complexified, allowing a creative advance introducing logical & efficient coordinations and with them the informational operator. Both sets of actual occasions worked together, producing the product of differences characterizing energy and with it life. These highly complex, dissipative & chaotic living systems became sentient the moment they consciously began to coordinate their activities and use signs to modify themselves & their environments. In this short universal ontogenesis, a complexification & differentiation happens. Duality is at the heart of this. At the level of sentient organisms using conceptual thinking, the distinctness between object & subject is so prominent that it easily gives rise to the wrong view of their difference. Duality is not the problem, but its reification is. Things are not different, they are distinct.

Given the dualistic structure of conceptual cognition, immanent metaphysics formulates an onto-categorial scheme featuring objective & subjective aspects. The scheme describes the basic operators of the existents, i.e. that which exists or is. In the immanent scheme, the ongoing world-process is considered given and not questioned. Access to this process is by the senses and the mind. The senses provide us with sensate objects, the mind with mental objects. Both objects are possessed by an object-possessor, the mind. When, thanks to observation (testing) and communication (arguing), facts are cast in empirico-formal propositions, and valid conventional knowledge or rational object-knowledge is acquired, the conventional condition of all possible direct experience is satisfied. Both vectors producing factual knowledge have done their job. Then, backed by the propositions of science, a broader, more speculative horizon may be argued. This is the exercise of a critical metaphysics never stepping outside the limitations of possible experience mediated by concepts.

To format the objective side of our proposed immanent metaphysics, we devise a framework directly derived from the structure of the sphere of observation. This structure is universal and so holds for all possible observers. It is also a necessary empirico-linguistic framework without which no observation would be possible ! Take away a condition, and the possibility of observation itself vanishes.
All empirico-formal statements of fact made by an observer about the observed are always & everywhere necessarily framed by the local rotating sphere of observation of the observer, universally & globally defined by a horizontal plane with four cardinal points of reference (East, South, West, North) and a vertical plane with two points of reference (Nadir, Zenith), i.e. by six directions in space. Counting the intermediate directions yields 10 directions and one direction in time. This sphere is not merely a static spatial reality but a continuous, ongoing process in time. Frozen, it represents only a single moment or instance of the mundus.

the mundus : the sphere of all possible observers
• horizon of observation = circular field representing the consciousness of the observer, defined by divergence, namely of four quarters rooted in O, the neutral origin of the sphere (0,0,0), and of the interconnectedness evidenced by all objects possessed by the observer ;
• prime vertical = evolutionary field of an observer moving upward and, in so doing, enlarging the local horizon from origin or nadir to final aim or zenith, reflecting the convergent evolution of each single observer ;
• actual orientations P1, P2, ... = actual positions of observation taken by the observer within the boundaries of the sphere at any given moment in space-time ;
• diurnal hemisphere = the realm of conscious awareness ;
• nocturnal hemisphere = the realm of unconscious awareness ;
• the sphere as a whole = the totality of all immanent realities and idealities or possible actual occasions happening to any observer - the object of immanent metaphysics ;
• the periphery of the sphere = limit-concepts defining the boundaries of the sphere positively and its transcendence negatively ;
• the beyond of the sphere = the ineffable transcendent, knowable but non-conceptual, the non-separation of potentiality & nature of mind.

Although each observation is as unique as its local sphere, its geographic analogies are universal, as in a global sphere. If the local sphere of a single observer provides the semantic architecture for a particular "reality-for-me", then the conventional sphere of a multitude of observers reflects "reality-for-us". The horizontal plane is associated with (a) the diversity of beings and the way they interconnect despite their divergence and (b) their respective "horizon" or limitations. The vertical plane involves the evolutionary process of each, moving from nadir to zenith, calling for the dynamical convergence and the ongoing creative resolution of both Epimethean and Promethean interests.

On the subjective side of our proposed immanent metaphysics, an open, bimodal & dynamic subjectivity is designated, albeit one more extended than what the empirical ego has to offer, even in its non-substantial, intersubjective format. Epistemology confirms the empirical ego, although no longer substantial and solitary (Cartesian), needs the transcendental I to maintain the unity of the sensate & mental manifold constantly arising in the consciousness of any observer. For Kant, and rightly so, this was an empty self "for all times". The self argued by immanent metaphysics is a dynamic continuum of higher states of consciousness grasped by a higher ego, wrongly designated as an inherently existing self or soul. This yields a bimodal structure with (a) an empirical ego at the centre of a circle or field of consciousness grasping sensate & mental objects and (b) a higher self grasping at hyper-concepts.
In this scheme, the proposed higher self, acting as a kind of bridge to the nondual, is not a way to gain direct access to "reality-as-such" or absolute reality, nor does it cause the latter. In the past, this self was endowed with an "intellectual perception" or "intuition" giving it access to the absolute nature of any phenomenon viewed in terms of substance. Although such access is not denied, it is not projected on this higher self, but found in the intimacy of the emptiness of all concepts and the direct experience of the original and very subtle level of selfless awakening which is the mind's deepest potential or generative capacity, the mind of Clear Light*. Neither is the higher self rejected, but it is found to be a less complex mode of cognitive functioning. When nondual cognition dawns, this self loses its "ontic" grip and transforms into a truly transparent higher self acting as a bridge between the non-conceptual and the conceptual, between the formless and form. Access to this pinnacle of cognition cannot be given by conceptuality, not even by creative hyper-concepts. The latter only lead to the idea of an Author of the world, not to a transcendent Creator-God.

The higher self merely produces a series of totalizing creative concepts enabling the integration of a vast set of views concerning the objective & subjective sides of the world of actual occasions. It is the centre of awareness apprehending the limit-ideas & hyper-concepts of the creative mode of cognition. It is constantly invited to step beyond certain thresholds and, as long as its gigantic reifications endure, it usually does contaminate its "natural" context. This reification is the source of its tragi-comedy. Displaying a whole range of meaningful (semantic) presences like signals, icons & symbols, the interdependent consciousness of sensate, cognitive, affective & actional experiences synthesizes an inner, panoramic perspective. This is grasped by the "I am" or the higher self of creative thought, transforming, through its inner vision, the dual tension of formal & critical conceptuality into the hyper-conceptual experience of life as a single meaningful conscious event hic et nunc, for me.

Creative thought is the optimalization of :
• self-reflection, or the inner dimension of the higher self ;
• free thought, acting on the human right to exhaust its potential as an autarchic individual ;
• encompassing finitude and a panoramic, overlooking view (completing immanent metaphysics).

Although the higher self is untestable but arguable, its presence is undeniable in an existential sense. Most human beings need to invoke a sense of "I am" to be able to exist. Thanks to this creative operator, a series of totalizing, unconditional thoughts or hyper-concepts is designated. These are apprehended by the self and are part of its ongoing "making of the mandala". They are sublime, imaginal, artistic constructions of the mind, and seem limitless, substantial & permanent. They occupy the end of finitude, and define the borders of the ontic subjectivity at work in immanent metaphysics. They are the illusions necessary to keep conventional reality going. Immanent metaphysics retains the division between object and subject. It may reify this on purpose, "as if". The former is a totalized picture of the outer world and the latter an inner mandala having the higher self in its centre, or an elliptical consciousness with two foci of I-ness : one empirical ego and another trans-empirical higher self.
Ultimate logic is the reason why any claim of inherent existence is void. In their practice of knowledge-production, scientists, for reasons of methodology, adopt a realist or idealist stance. So to grasp the total picture and view the world, pre-critical immanent metaphysics posits a real world "out there" and a supermind "in here" or "up there". While both are illusions, they merely help to totalize all possible phenomena. When, under ultimate analysis, these are finally unmasked, all reified concepts are burned in a single "prise de conscience". Thus liberated from afflictions, the mind is ready to awaken to its original state (with the higher self being a true process self).

The own-self is realized in five stages :
• building : on the basis of the super-ego, the "summum bonum" invented by the empirical ego, a total & totalizing icon or "Gestalt" is generated. It comprises sense data, consciousness, cognition, affection and action. This is a vibrant grand picture, a sublime summary or "mandala" of what the empirical ego is able to perceive as its own ultimate constructive self-representation. This stage is purely empirical and does not escape the confines of the formal & the critical modes of cognition ;
• concentrating : once the mandala is made, prolonged concentration on it decenters the ego, and "purifies" all which does not belong to the mandala, allowing the ego to take on the form of its own ideal, and distinguish itself clearly from its negative, the Shadow (cf. Jung). This form is not yet the higher self, but a ladder to the plane of creative thought ;
• becoming : insofar as the mandala indeed represents the best the empirical ego is capable of, this vast representation is internalized and perceived "from within". Instead of visualizing the mandala "before" the ego, it is observed with "the eye of the mind" and realized as an inner object of consciousness. When this happens, the mandala, or visualized correct self-knowledge, is seen from within, with the direct experience of I-ness, of "my" soul or self placed at the center ;
• actualizing : self-realization initiates the production of self-ideas, more than a projection of the super-ego, but the living experience of an individual, historical being experiencing itself directly as an inherent self witnessing (integrating) all empirical & mental states of consciousness in its being-there ("Dasein"). The higher self is still an ontic (own) self ;
• annihilating : the last stage of the higher self is the end of its reification, namely when its own root is directly discovered as the nondual light of consciousness, the natural state of the mind, the mind of Clear Light*. When the subtle illusion involved is pierced, the ontic self is destroyed insofar as it was an ontological illusion. No longer a someone-on-its-own, the individual becomes a fully participating, dependent & awakened being. The higher self transforms into the transparent selflessness of awakening.

When the reality & ideality of the world have thus been totalized, the present immanent metaphysics poetically posits the "optimum" of limit-concepts : the Anima Mundi or "soul of the world". This is the "form" of the world, its entelechy, forming one being with it. As a "feminine", receptive principle (linked with the double movement of inspiration & expiration from the world-ground), She is wholly "of the world", not a transcendent Creator outside the totality of actual occasions. Her immanence mirrors the pataphysical, the hidden Divine of the world.
But as She only brings into actuality what is potential, She is the entelechy of the universe itself and does not transgress its boundaries. In all points of the universe, She encompasses everything all the time. In process thought, She is the immanent way God* deals with the world. Immanent metaphysics, arguing the existence of this Great Soul and focusing on its conservative and designing nature, cannot explain Her, except if reference is made to the world as a whole, and nothing more.

God* as the immanent Divine present with all actualities
God* as the transcendent "Lord of Possibilities"

Finally, after all these speculative efforts, immanent metaphysics prepares the end of reifying conceptuality and, by way of an ultimate analysis, undermines the affirmation of the substantiality of all phenomena. This is the purification of the conceptual mind.

A. The Limit-Concepts of Reason.

§ 1 Finite Series and the Infinite.

α. In mathematics, a limit L is the value V a function or a sequence "approaches" as the input or index I approaches some value. I and V may be finite or infinite. When V tends towards infinity (because I approaches zero or infinity), a point at infinity is approached. An endless, asymptotic increase is given, but an actual value is never attained. This is a point at infinity, not an actual infinity (an infinite value actually part of the set of real numbers). It merely acts as an indicator of a point transcending every possible sequence of numbers or functions between quantities and their momentum.

α.1 Likewise, in transfinite calculus, infinite numbers like Aleph0 and Aleph1 are not the Absolutely Infinite (or "Omega"). These transfinite numbers are the rungs of the ladder of infinity, belonging to the set of actual infinities.

α.2 The ultimate, absolute nature of phenomena, the absolutely infinite, is the ineffable object of transcendent metaphysics. All other relative infinities are contingent and so limit-concepts, returning to the world and thus constituting its periphery.

β. For Kant, the category of the ultimate called "God" is derived from the category of relation. The interconnectedness of the manifold cannot be denied. It evidences architectonic unity & scope. This leads to the limit-concept of the Architect of the World or an Anima Mundi, not to a transcendent Caesar-God. Nothing conceptual warrants such a move. Stepping outside possible experience, we transgress the conditions of conceptual knowledge. - Kant, I. : CPR, B350.

γ. Those who devised apologies for their version of the singular theist God all made the same mistake : they objectified beyond all possible experience the unconditional unity of all possible predicates, filled the gap "per revelationem", passed beyond the conditioned, and inevitably ended their legitimate rational quest for the most perfect being ("ens perfectissimum") by affirming a hypostatized "ens realissimum". While the former is a possible concept, the latter reification is a transgression. Conceptual reason is not equipped to cross the borderline of the world. Conventionality is all it has. It must settle for that. It cannot move outside the world and experience it like any other object. Transcendence is not outside the world, but the same world observed without conceptual elaborations. The old Platonic topological view must be abandoned and replaced by an ontology based on the notion of a universal dynamical flow (of matter, information & consciousness).
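The grades of "infinity" distinguished in § 1 can be displayed compactly. What follows is a minimal notational sketch only, assuming Cantor's standard symbols (ℵ for the transfinite cardinals, Ω for the Absolutely Infinite) ; it illustrates the distinctions made above and adds nothing to the argument :

% 1. A point at infinity : endlessly approached, never attained.
\lim_{x \to 0^{+}} \frac{1}{x} = \infty
% No actual value is defined ; only an endless, asymptotic increase is given.

% 2. Actual, transfinite infinities : the rungs of the ladder.
\aleph_0 = |\mathbb{N}| < \aleph_1 < \aleph_2 < \ldots
% Each rung is an actual infinity, yet each is surpassed by the next.

% 3. The Absolutely Infinite ("Omega") exceeds every rung :
\Omega \neq \aleph_{\alpha} \quad \text{for every ordinal } \alpha
% Hence it is no number at all but, in the vocabulary used here,
% the ineffable object of transcendent metaphysics.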
δ. Totality of immanence and infinity of transcendence are the two major leading ideas of metaphysics. Totality, as a limit-concept, aims at all possible actual occasions, the complete & full apprehension of the world. Infinity, as a transcendent signifier, does not border totality, as in the topological view, but penetrates totality. In fact, in every single moment, totality and infinity happen simultaneously. When duality turns absolute, nonduality ensues.

Immanent metaphysics encompasses all possible actual occasions, i.e. all spatiotemporal building-blocks of the known. An ongoing series X (for example x = 1, 2, 3 ...) is not stopped ad hoc and only the limit of this series -with x going towards ∞- is accepted as a point at infinity, suggestive of the "periphery" of the immanent sphere of observation. This is not an actual infinity, nor an infinite number part of the sphere of observation. Aggregates of actual occasions are finite series, but entering the togetherness of other aggregates, they eventually merge in the quasi-infinite series of the huge sea of process, the vast ongoing architectures (with their differential formulas) of momenta of all kinds hic et nunc ! Attributing qualities to this point at infinity, we violate the principle of immanence. The sum of this quasi-infinite sequence of expressions can never be made, for the series continues to accumulate endlessly. This is what is meant by the "totality" of the world as a system. The moment we end the ongoing accumulation of the quasi-infinite set of mundane happenings, and posit an actual infinite, a transgression has taken place. Concepts cannot enter the non-conceptual. The fire of the highest mode of cognition cannot be stolen. Promethean zeal only ends in eternalism (positing substances) or ontological negationism (positing the absence of order in the ongoing processes of the world).

Despite objective & subjective transgressions, transcendent metaphysics accepts infinite objects. Emptied by ultimate logic and inspired by transfinite calculus, these objects are the absolute sufficient ground of the totality. They are the infinite embracing totality, the infinite piercing the finite, moving along with the finite, and thereby bringing paradox, bewilderment & wonder. This is the perplexity of the rational mind before the original light of the mind as directly experienced in the nondual mode of cognition.

α. Before Kant, substantialism was axiomatic. The existence of a self-sufficient, fundamental ground was discussed but not truly questioned. Metaphysics was substance-based. What makes the beings be ? "To ti ên einai", literally the "what it was to be", is primordial substance ("ousia"), an "hypostasis" or "hypokeimenon", an underlying thing. Process, movement, change, motion & transformation are accidental and supposed not to affect the essence ("eidos") of this self-powered ultimate being ("causa sui"), this "substance of substances" sustaining the being of the beings, offering them their permanency. All things merely participate in this ground. Find this permanent, eternal, unchanging thing-of-things and all the rest is supposed to follow ...
β. To eliminate the root-cause of the ontological illusion, Kant attacked the three main substances thematized before him :
(a) the soul : interacting with extension, the "res cogitans" is rightly given a distinct main role to play, not one of merely being an auxiliary of "res extensa" or God, but is defined as a substance with inherent properties ;
(b) the world : the extended is filled with matter and movement, the basic ingredients of physical objects, but these are defined as independent and separate from each other, operating in absolute space and absolute time as a gigantic clockwork, self-powered & "out there" ;
(c) God : the transcendent absolute as defined by fundamental theology fails in logic. Affirming the substantial God of theism is stepping outside the boundaries of conventional experience, and what lies beyond those boundaries is ineffable, for it is non-conceptual. While "Credo quia absurdum" may inspire believers, it cannot satisfy philosophers. An understanding of God* beyond intellectual embarrassment is called for. This begins by grasping the conservation of the world, its design and the possible existence of its Author, the Grand Architect or "ens perfectissimum", a most perfect being. The latter is merely the immanent aspect of God*. A right view being the first step, one would expect the age of mere faith to soon be over !

γ. These transgressions, typical for essentialism, lead to antinomies & paralogisms. A variety of ontologies ensue. In each, the finite series did not remain continuous. Its continuation was aborted ad hoc and made static by the axiomatic affirmation of an autarchic, inherently existing, substantial self-sufficient ground or underlying (absolute) eternal thing. A logically unacceptable jump from the finite to the infinite order was made. The argument fails. Criticism understands the Ideal, the Real and the absolute from another vantage point. No longer seeking the static, eternal core, one focuses on the dynamic stream of interconnectedness between all things. The study of and meditation on this stream reveals the ultimate nature of all phenomena not to be outside or beyond this stream, but precisely this stream experienced as empty of an own-self or substantial core. Knowing something is an illusion should prompt us not to be fooled again.

Understanding the soul, the world and God, and in that order, reflects the totalizing intention of metaphysics. But since the Greeks, it has never been bridled by our understanding of the limitations of conventional knowledge in general and of the conceptual modes of cognition in particular. Pre-Kantian philosophy embraced substantialism and concept-realism. Immanent metaphysics studies the objective (with as limit the Real) and the subjective (with as limit the Ideal) aspects or modes of all actual occasions. The latter are fundamental, for they are shared by both object & subject. With "soul" is meant the conscious observer, meaning-giver or sign-user, making choices and changing things. With "world" is meant all actual occasions happening at moment "t". With "world-system" is meant the totality of this world and the world-ground. With God* is meant the only abstract actual occasion bridging all possible potentiality and all spatiotemporal actuality. These are ontological objects, but process-based. Not a single objective or subjective own-thing can be found, can it ? Find a substance with inhering properties and erect a substance-based ontology to see this fine structure demolished by ultimate logic.

§ 3 The Copernican Revolution.
α. In the first predictive mathematical model of Heliocentrism, Copernicus understood why the Earth had to turn around the Sun and not the other way round, as his learned contemporaries believed. Neither did they grasp why the Earth would spin at all ! Heliocentrism ended the elect role of humanity in the worldview advocated by the "religions of the book". Before Copernicus, most scientists, loyal to the Hellenism of Late Antiquity, adopted a geocentric view of the world. The complex geocentric model of Ptolemy worked, so why abandon it ? Heliocentrism had been proposed by Aristarchus of Samos in the 3rd century BCE, but this valid view had been put aside. Why ? Because he had retained circular orbits. Copernicus too was unable to let go of these.

β. What was a solid geo-ontological ground, namely the Earth as the objective centre of the cosmos of the Caesar-God, became a mere point of view among many others. The decentration of objectivity invited a turning around to counter the crisis it caused. This led to grounding the importance of subjectivity in nothing else but subjectivity (the Cartesian ego had indeed remained essentialist). It also invited intersubjectivity and the revolutions intended to achieve social justice (cf. 1789 & 1917). Indeed, the Solar kings -assumed to have received their crown from the God of revelation- were dethroned.

β.1 We need to realize each observer (the Earth) knows reality (the Sun) from a unique & singular point of observation. The fact our relative point of view cannot be escaped points to the importance of subjectivity, of the knower in the act of cognition. Along with the known and knowledge, the knower is a necessary, co-relative but independent element of the process of producing knowledge. The knower is at the centre, not the absolute ground. This is a paradigm shift away from autarchic objects and figures of authority to a reflection upon the conditions & possibilities of selfhood, no longer viewed as a substantial, eternal (immortal) soul.

β.2 The knower is no longer a passive gatherer but a creative participator. Shaping its own world, humanity can do nothing less than take up full personal responsibility for what happens on planet Earth. Only a global political system is able to solve problems, for nationalism will fail. The time of independent nation-states is over. The Copernican Revolution is the realization that every moment each observer occupies a unique vantage point.

γ. While we observe the Sun rising and setting, we know this is due to the rotation of the Earth. Our understanding of the phenomenon does not change our observation. Likewise, we may see the disk of the Moon change, while astronomy tells us otherwise. So also in epistemology.

γ.1 The transcendental apparatus is not a property of the known, but a functional characteristic of the knower. The subject seems passive while it is actively designating & attributing labels to objects. We do not grasp perceptions, but sensations, and the latter are already interpretations of perceptions.

γ.2 In ultimate analysis, the "Via Regia" to the end of reifying conceptual elaboration, the same reversal is found. While we observe conventional objects to exist independently & separately, both logic & physics teach they are, at the most fundamental level, dependent and non-local. They appear at the explicate level as solid & permanent, but are in fact vacuous & constantly changing.

δ. Illusion is precisely this : appearing otherwise than what truly is the case. All conventional knowledge is such an illusion.
Valid conventional knowledge (science & immanent metaphysics) is merely a sophisticated format of the valid but mistaken appearance of disconnectedness. Like galaxies, Solar systems, planets, mountains & large monuments, valid conventional knowledge may last for a very long time, but cease it eventually will. As creative advance is ongoing, what we know constantly changes. Any kind of institutionalization fails to follow the tide. A lost battle against cultural lag animates the ranks of academia. Eventually, even the Himalayas will crumble to dust. Conventional knowledge is valid but mistaken. Absolute knowledge is beyond validation and unmistaken. The former is outspoken, the latter silent.

The observer is not merely a "passive intellect" taking in sense-data and organizing them post factum by way of an "active intellect". Observation takes place in an already established framework of names, labels & identifications (negations). The latter is the outcome of the slow complexification of our cognitive texture, expressing itself in various modes and, in due course, establishing various mental operators. What began as coordinations of movements became internalized functional processes and various "intelligences". The object of knowledge is not "naked" as naive realism wants it, but the end result of interpretations made by the cognitive apparatus. While Kant supposed these interpretative structures were universal, and part of them are, the idiosyncrasies of observation are noteworthy. The Copernican Revolution is a decentration.

§ 4 The Linguistic Turn.

α. Having accepted the importance of the knower, we focus on our human capacity for language, i.e. the meaningful manipulation of signs like signals, icons & symbols, to impute, designate, label or attribute. In the process of this differentiation undergone by our cognitive texture, generating the conceptual mind calls for semiotical functions. They are crucial in placing labels and in identifying sensate & mental objects. These labels are communicated to the intersubjective milieu of sign-interpreters.

β. The actual use of languages defines a set called "userware", operating in immediate, mediate and general contexts. This is a kaleidoscope of choices, but also a calendar fixing an itinerary & the rites of passage undertaken by humanity in its sentient evolution. In the most general way, this set brings together all possible sentient activity at work in the world as a whole hic et nunc, including the infinitesimal possibility of sentience potential in every actual occasion, the building-block of ontology.

γ. Insofar as the object of immanent metaphysics is concerned, the world is the totality of all actual occasions taking place in a single moment of the existence of the universe, encompassing all actual occasions ; material, informational and sentient. The world-system may generate countless consecutive worlds. The "breath of the worlds" is the flux of the ongoing arising, abiding, ceasing and re-emerging of the world out of its ground.

δ. The subject of knowledge claims the object by naming. Designating labels, imputing fixed characteristics, properties & relations, this conceptual, conventional knowledge -if valid- is the right tool to solve functional problems dealing with activities involving instruments, strategy or communication. But this relative knowledge is not absolute and so mistaken.
δ.1 The impure, reifying conceptual mind posits an independent, separate & substantial object, superimposing a category-mistake upon perception & reason.

δ.2 If a substance-based object is found, can it be anything less than massive ? It must therefore be easy to ostensively identify such a substance, self-sufficient ground or essence, must it not ? If we check all the rooms of the house for the presence of this hippopotamus and after thorough investigation none is found, then may one not safely assume there is no hippopotamus in the house ? Perhaps not with full certainty, but surely with likelihood enough. Only in our imagination do hippopotami become invisible ...

ε. The Linguistic Turn is a deepening of the Copernican Revolution. The latter argues the necessity of the observer, the former its creativity & awareness. Indeed, now sentience itself, the consciousness of the observer, becomes the crucial symbolizing part of the process of acquiring knowledge, as it were emancipating the self-reflective activity of the "ego cogitans" begun by Descartes. In this self-reflection, conscious critical awareness and the production of symbols (leading up to Artificial Intelligence) are integrated. The meaning of language depends on how its signs, the units of meaning, are consciously manipulated. Meaning-shifts happen constantly and only by repetitive use can certain glyphs (or well-formed, meaningful states of matter) endure over longer periods of time. Thus turned into cultural objects, they face the rise, fall & rebirth of civilizations. Creativity (novelty) and symbol-production move with conscious intention, choice, meaning, sense, sentience and functional activities involving sensation, volition, emotion and thought.

Understanding the importance of signals (waymarks), icons (meaningful images) and symbols (denotative & connotative referents) results from placing the subject of knowledge at the centre. The knower grasps or possesses the object, with signs as outer manifestations of this mental apprehension. They allow this to be communicated to the milieu and add objects to the domain of information. The latter is comprised of natural and artificial data. Insofar as these conditions & determinations reflect the architecture of the cosmos and life, "natural" software is at hand. Thanks to the sentient activity of humanity, cultural objects are added, and these are merely artificial designs put in by the creativity of Homo sapiens sapiens.

α. The two regulative ideas of transcendental reason established by the critical mode of cognitive activity are derived from the two sides of the Factum Rationis pointed out by transcendental logic ; the condition of objectivity, implying thinking must imply the extra-mental, and the condition of subjectivity, implying one cannot eliminate the thinker and intersubjective communication. These ideas, called "the Real" and "the Ideal" respectively, do not constitute the objects known, but merely regulate the cognitive activities associated with the pursuit of objectivity and with mental clarity, acuity, focus and sense of truth respectively.

β. "Extra-mental" means the object of knowledge must be considered as a separate, independent entity on its own, i.e. some thing "out there". If a reasonable account of the possibility & production of knowledge is to be made, conventional knowledge must imply this.

β.1 Science must -a priori and methodologically- consider the reality of the object of knowledge as if it represented absolute reality. Suppose this is not the case.
Then scientific knowledge is never about some thing, but merely represents the objects of intersubjective consensus.

β.2 So even the "statute-law" of theoretical epistemology provides that one must accept facts to possess a theory-transcendent facet. This necessity shows how valid conventional knowledge cannot operate without the possibility of substantial instantiation or reification.

β.3 The purification of the conceptual mind, or the end of reifying concepts, yields a special mind. Science perfectly functions without it, but metaphysics -if it wants to delve deeper than the world- cannot. The temple of transcendence can only be trod by this purified mind. Theoretical epistemology stops the reification of the conditions of knowledge (the reification of the ideas of the Real & the Ideal), but it must accept that facts carry the weight of the absolute, even if this were not the case ! Practical epistemology introduces the "as if" mentality, substantializing idealism and realism for methodological reasons.

γ. Thinking the thinker implies the subject of knowledge must be grasped as a transcendental "I think" for all times, the capstone of a cognitive system in three stages and seven modes. In order to guarantee the unity of the manifold of objects apprehended by the knower, this formal focus necessarily accompanies every cogitation of the empirical ego. It is independent & separate from it and is a formal principle posited by necessity. Hence, even transcendental thinking, purged from essentialism, must accept this subject of subjects, a desubstantialized absolute ideality. Creative thought turns this formal self into an ontic own-self.

1 Mythical libidinal ego
2 Pre-rational tribal ego
3 Proto-rational imitative ego
- barrier between instinct and reason -
4 Rational formal ego
5 Critical formal self
- barrier between reason and intuition -
nondual selfless (transparent) self

δ. Because at this critical level of thought unsolved tensions and delusions remain, the creativity of the higher self is necessary. The appearance of the Clear Light* is the outcome of the final purging of reification, leading to selflessness-in-prehension, the end of self-cherishing and self-grasping in its coarse & subtle forms. Annihilating this ontic self brings about the transparent selflessness of awakening.

The operations of the conceptualizing mind are regulated by the ideas of reason. These limit-concepts tend towards the optimalization of valid conventional knowledge. The idea of the Real regulates by presenting the correspondence of valid conceptual knowledge with absolute reality, the idea of the Ideal by bringing the "consensus omnium" of all sign-interpreters to the fore. They merge and form a point at infinity, a "focus imaginarius" never itself conceptually known. These ideas are not transcendent, but the transcendental conditions of objectivity & subjectivity respectively.

α. The objective & subjective sides of the Factum Rationis -ruling all possible cognitive activity- are self-evident, and by necessity regulated by the ideas of the Real and the Ideal respectively. Likewise, the object of immanent metaphysics, namely the world, also evidences objective and subjective limit-concepts. Only the object of transcendent metaphysics, namely the world-ground, is beyond limit-concepts. As conceptualization stops, signs attempting to grasp this a fortiori imply paradox and inconsistency. Transfinite calculus, advancing actual infinities, although indicative, cannot bridge this and so remains inconclusive.
Can one therefore speculate on the transcendent ?

β. The world is a sea of actual occasions acting out matter, information and consciousness, the three fundamental aspects of every single momentary actual occasion. The question arises : how is this order possible ? Ignoring the extraordinary radiant brilliance of this dynamical architecture, even over very large periods of time, is inept. Moreover, mere stochastic views run against the high unlikelihood of the parameters of this cosmos, with its life & sentience ...

β.1 To call for a transcendent cause to explain the world is going too far. Logic forbids the direct, uncritical use of an absolute self-sufficient hypostasis, signals the use of a transcendent signifier, and deconstructs it. Transcendence posits ad hoc an end to the endless progression deemed possible by the immanent view. Indeed, the actual finitude of the world cannot be demonstrated (while its quasi-finitude may be accepted). Neither should the possibility of an infinite series be rejected beforehand.

β.2 A transcendent & infinite absolute towering above a finite world, a Pharaonic "substance of substances", cannot be posited without logical problems. Even non-substantial, process-based speculations about infinity are not without paraconsistency. The non-conceptual cannot be grasped by any concept. Indirectly, poetry may translate this direct awakened experience of the world in the nondual mode of cognition. If this is the case, then a hermeneutics of the signs used by mystics is possible.

γ. To be rationally established, the order of the world does not need a transcendent cause. This was proven by Ockham.

γ.1 To avoid any problem with the infinite regress in time of the horizontal series of interacting and interdependent efficient causes, jump to the actual, vertical order of events hic et nunc. So not as they are happening in the horizontal, temporal, functional, physical order, but as they are happening in every succeeding moment. By doing so, one always avoids an infinite regress. Is it not a solid axiom to affirm the world is not infinite in each actual moment ? If not, how to avoid blatant absurdities ?

γ.2 The revised a posteriori argument from efficient causes :
Case to be proven : "A first conserving cause exists."
• Major Premise : in the contingent order of the world, nothing can be the cause of itself or it would exist before itself ;
• Minor Premise 1 : an infinite series is conceivable in the case of efficient causes (existing horizontally one after the other), but very unlikely in the actual (vertical) order of conservation "hic et nunc" ;
• Minor Premise 2 : an infinite regress in the actual, empirical world hic et nunc would give an actual infinity, leading to absurdities like being born before one's own mother ;
• Minor Premise 3 : a contingent thing coming into being is conserved in being as long as it exists or abides - being contingent and so impermanent, it eventually ceases ;
• Conclusion : ergo, as there is no infinite number of actual conservers, there is a first conserver ;
• Lemma : if we suppose an infinite regress in the actual, empirical world hic et nunc, then an actual infinity would exist, leading to absurdity ;
• Ergo, at least a first conserving cause exists. QED

γ.3 The (supposed) finite order of the world of contingent actual occasions cannot be conserved without a first conserver. Thinking an actual infinity may and often does lead to rationally unacceptable inconsistencies.
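The revised argument of γ.2 can be given a schematic first-order form. The predicate letters below (K for "conserves hic et nunc", C for "is contingent") are introduced here for display only ; this is a hedged sketch of the argument's skeleton under the stated premises, not a claim to formal proof :

% Major premise : nothing conserves (causes) itself.
\forall x\, \neg K(x,x)
% Minor premise 3 : every contingent occasion is conserved by something.
\forall x\, \big( C(x) \rightarrow \exists y\, K(y,x) \big)
% Minor premise 2 (with the lemma) : no infinite chain of conservers
% exists in the actual, vertical order hic et nunc.
\neg\, \exists\, c_1, c_2, c_3, \ldots \ \text{ with } K(c_{i+1}, c_i) \text{ for all } i
% Conclusion : the chain of conservers above any contingent occasion can
% neither loop (reading the major premise as acyclicity) nor regress
% endlessly, so it terminates in a conserver not itself contingent :
% a first conserving cause.
\exists f\, \big( \neg C(f) \ \wedge\ \exists x\, K(f,x) \big)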
δ. The argument from design runs as follows :
Case to be proven : "The world has an intelligent, proximate cause."
• Major Premise 1 : the world is an organized, contingent whole, evidencing variety, order, fitness & beauty ;
• Major Premise 2 : it is impossible for this arrangement to be inherent in the things existing in the world, for the different entities could never spontaneously co-operate towards definite aims, not even over very long periods of time ;
• Minor Premise : definite aims need a selecting and arranging purposeful rational disposing principle ;
• Ergo, the world has an intelligent, proximate cause. QED

δ.1 For Kant, the argument from design led to the "stage of admiration" of the greatness, the intelligence and the power of the Architect of the World, who is indeed very much restricted by the creativity of the stuff with which to work. Unlike the Architect, the Creator-God of monotheism, as an Author both self-sufficient, necessary and transcendent, can do whatever He likes to change things immediately !

δ.2 This Architect of the World, "God of the philosophers" or God*, is neither omnipotent nor powerless. Omniscient of what happened and what is happening now, not of what will happen in the future, this Anima Mundi or entelechy of the world is receptive and generative of order ... But perhaps also of orders inimical to life & sentience itself, pre-crystalline architectures close to the seminal state of the world.

δ.3 Understood thus, the order and beauty of the world point to a final end, namely to actualize all its possibilities, itself an ongoing, endless process regulated by limit-concepts. The conserving "soul of the world", or intelligent proximate cause of the world, does not transgress the boundaries of the world.

δ.4 In all points of the world (both in each moment and over time), this Architect, Great Soul or Great Mother encompasses everything all the time, keeping all actual occasions in her fold, passing by each single one of them.

∫ Seek to affirm conservation and (intelligent) design in harmony with the Big Bang, relativity, quantum, chaos & natural selection.

ε. On the subjective side, the world displays subtler (deeper/higher) levels of consciousness. The empirical ego observes the display of sensate and mental objects it possesses on the surface of the "mirror of the mind", in other words, as part of the circular field of consciousness with this ego at the centre. This is the coarse, empirical mind.

ε.1 This coarse mind receives five sensate objects and identifies them by imputing conceptual labels & names on them. The five sense-consciousnesses associated with them can be established by this conceptualizing mind as long as (a) the sensitive surfaces of the healthy sense organs receive stimuli, (b) these inputs are properly decoded and transferred to the thalamus and (c) the thalamus projects this afferent information on a well-functioning neo-cortex.

ε.2 The coarse mind also possesses mental objects. These are used to communicate information with other minds and label sensate objects. The ontic ego has a strong sense of inherent identity, with feelings of autarchy and an innate freedom of choice. It seems to exist separately and independently. It is a special mental object, namely a sentient one, a consciousness displaying emotional states, intentions, thought and self-consciousness.
ε.3 Given the empirical ego is the root of the direct experience of sensate & mental objects and also the origin of conceptualization, naming & labelling, the realization of its impermanence is crucial to make it pliant enough to establish the subtle mind. Because the magnificent, sublime & blissful character of the subtle mind leads to the subtle delusion of identifying it as a higher, eternal self (a new ontic, own-self), unsolved tensions remain. This subtle mind, established by observing the insubstantiality of the coarse mind, also needs to be totally desubstantialized, leading to the higher self and then to the selfless transparency of the mind of Clear Light*.

ζ. The subtle mind no longer establishes the inherent, substantial ego based on sensate and mental objects. To observe the lack of inherent properties in the subtle mind and the three root-causes of all conceptual activity properly prepares -so transcendent metaphysics claims- the awakening of the mind of Clear Light*. This is the original, natural state of the mind, the very subtle mind or fundamental stratum or layer of mind. But insofar as immanent metaphysics is concerned, this ultimate mind*, based on an ineffable but actual nondual experience, can be nothing more than a limit-concept. Only full-emptiness, the union of bliss and wisdom, endures.

Immanent metaphysics should not posit an absolute entity, Deity or Supreme Being outside or behind the world. Theology should abandon Platonic topology to convey transcendence. Outside the world, this "Urgrund" or Unmoved Mover is a fortiori something radically different from creation. It is hard to imagine how such a Being would communicate with the world. Insofar as the Architect of the World remains part of the world, immanence prevails. Immanent metaphysics (backed by valid argumentation) can go no further. Sublime poetry -though it falls outside philosophy- may inspire a hermeneutics of salvific poetic signs. Positing a transcendent Being feeds the illusion of a self-sufficient ground. The Architect of the World, the immanent approach to the world by God*, is not a creator "thinking" the world before its incipience, fashioning it as it were "ex nihilo". The Architect of the World is not beyond the world but with every possible actual occasion. Transcendent metaphysics merely affirms a realm of sheer potentiality, but this is not to be confounded with a theo-ontological, self-sufficient Absolute Being or Creator-God. Such a "God-as-Caesar" is not found to exist. This makes one ask : what kind of God* does process metaphysics envisage ?

Subjectively, another limit-concept is introduced. The unity of conscious experience cannot be explained by the coarse mind. Formally, as critical thought explains, this necessitates a formal self "for all times", one merely accompanying every cognitive act of the conceptual mind. A deeper stratum is reached as soon as the coarse mind is emptied of itself, i.e. of its own identification as a substantial, independent and separate entity. This identitylessness of persons leads to the formation of a new, higher focus of conscious awareness. At first, this focus grasps at itself and generates an ontic self (an eternal soul or "âtman"). While offering a panoramic perspective producing creative concepts and a cosmic awareness, the ontic self does not exist from its own side.
Once this is thoroughly realized, the subtle mind is no longer caught in its subtle delusions and, in the poetical language of the mystics, the Clear Light* of the original mind or very subtle awakened mind shines through.

B. Diversity & Convergence in the World.

α. Considering the mundus, the horizon represents the ongoing complexification of all actual occasions, events & entities part of the world and distributed over the four cardinal directions. These are not only constantly interconnected, but also enter each other's history and therefore shape the fabric of an organic togetherness based on creative advance. The manifold, or the world disjunctively, is a sea of process.

β. The horizontal plane displays diversity, variety, multiplicity and differentiation. On an explicate level, this manifests as the vastness of physical space and the nearly endless temporal flow of events taking place somewhere. On the implicate level, this is the universal quantum plasma connecting all momentary actual occasions.

β.1 The ultimate or primordial ground of the world or world-ground is not a substantial Real-Ideal underlying all actual occasions, but a realm of pure possibility, of formative abstracts covering what is needed for the next moment of the world to happen. The world-ground is the sufficient ground of the world, but not a substantial, self-sufficient one.

β.2 World and world-ground constitute the world-system. The ground of the world is the potential out of which all possible actual occasions constantly emerge, to which they eventually return and from which they reemerge.

γ. The temporal, sequential & efficient togetherness of actual occasions and their aggregates also happens horizontally. Efficient determination is the direct physical impact of actual occasion A on actual occasion B. If this temporal "flow" were the sole determining factor of this togetherness, materialism would ensue. But then no creative advance would be possible. Adding architecture & sentience makes diversity possible.

The world is a set of actual occasions. These feature a temporal stream of interconnected moments. All possible interconnections fall into different categories of determination or lawful contact between actual occasions, like causality, interaction, statistical correlation, etc. These determinations & conditions contribute to the diversity of the world and are called "horizontal" because they all invite a succession of states or moments of existence. All these are instances of efficient determination, or the determination between actual occasions on the basis of their functions & temporality. If efficient determination alone ruled the world, no creative advance would be possible, for actual occasions would by themselves add nothing to the succession of happenings. The universe would be "dead bones", nothing but a "nature morte" of elements. This is clearly not the case. Science teaches the well-formed nature of the choice of natural constants and lawful activity in the physical universe. The laws of Nature suggest an immanent "logos" thinking these architectures.

α. Again, in the mundus, the prime vertical represents the continuous complexification towards unity, from hidden & simple ("nadir") to overt & complex ("zenith"). This coming out into the light of unity-out-of-diversity heralds the return of the world to its original singularity, to its last expiration (or evaporation) at the end. Because of final determination, the manifold becomes the one actual occasion, the world conjunctively, an organic sea of process.
This results from convergence between societies of actual occasions, an attunement of their participations in each other and the establishment of a cosmic participation throughout the members of the world.

β. The world is organic not only thanks to the (material) temporality of efficient, physical connectivity and interdependence, but also because of the ongoing informational and sentient activities of conservation, design & Clear Light*.

β.1 The material aspect, defining the horizontal plane, is -at every moment- indeed crossed by a non-material aspect at its vertical, an intelligent focus or "vis a tergo" reorganizing the probabilities of materiality and thus indirectly co-directing the material manifestation of particles & fields. The total available information provides the "mandala" of choices manipulated by sentient decisions. The latter, ex hypothesi, alter the structure of the probability-fields ruling material manifestation (cf. the collapse of Schrödinger's wave-function).

β.2 Of course, this vertical co-direction is hampered by the free choices of all other actual occasions.

γ. Information & consciousness define intelligent focus, or the combined activities of totalization, generalization, overview & sentience characterizing final determination. The teleology of the mundus fosters unity & the largest possible harmony. The vertical adjustment, balancing or finalization by any actual occasion enters and influences the efficient stream of the next. Thus efficient & final determination cooperate in every single instance of the mundus.

δ. In direct nondual experience, the very subtle mind of Clear Light* finds itself inseparable from the world-ground, the absolute ground of pure potentiality. The unity of all possible minds of Clear Light*, or the prehension by a single supermind of the world-system as a whole, is called "the primordial mind", "Âdi-Buddha" or "the mind of God*", the ultimate, omniscient, total & infinite prehension of the momentary.

∫ Of what cannot be conceptualized, only melody can speak.

Besides efficient determination, the mundus also features finality. This means the unity, creative advance and harmony between the various efficient characteristics point to a singularity, namely the world as a unity, a whole, a "mandala" of actual occasions. This is not merely a compound of disparate elements, but an organic unity consisting of all possible actual occasions. This engages the most comprehensive form of participation ; the unity & harmony of the manifold as apprehended by intelligent focus. This is a unity conscious of itself, i.e. the unique society of societies of actual occasions. Thus the world displays material efficiency hand in hand with informational organization (architecture) and the results of sentient, conscious choice. The latter two define its final determination, adjusting the horizontal flow of functional efficiencies by altering the structure of the propensities involved in the process of material manifestation. Finality, involving unity & harmony, emerges together with the conservation and the design of the world. This calls for God*, a supermind imputing its superobject and apprehending the world-system as a whole, i.e. the potential world-ground and the actual world.
Immanent metaphysics cannot move further and -in the context of a process metaphysics- merely points to the transcendent signifier as a category of potentiality, virtuality, possibility (emptiness) and its simultaneous manifestation as a vast network of interconnected actual occasions (fullness). But such a possible Grand Architect is never an Author, not a Caesar, nor a Creator.

C. The Alliance between Science & Immanent Metaphysics.

§ 1 The Alliance of Form.

α. Science produces valid empirico-formal propositions. These are necessarily statements referring to facts. Facts are valid but mistaken. Simultaneously, they are extra-mental and determined by mental objects. Because science works with propositions, it obeys formal logic. The latter defines the form of science. Of all logical operators, the negation is the most basic. Of all axioms, non-contradiction is the most elegant.

β. Metaphysics argues a comprehensive view of the world. It does so in metaphysical systems integrating scientific knowledge and the history of speculative thought, if possible world-wide. Because it is argumentative, it presents an organized, architectonic mental object. Having formal outlines, logic is implied. This is also the case for the procedure to settle arguments (the rules of argumentation). If metaphysics is contradictory and makes no efficient use of contradiction, it cannot be valid. The correctness or well-formedness of the argument is as crucial in science as it is in critical metaphysics.

Logic is the corner-stone of both science and critical (immanent) metaphysics. When certain rules conveying order and abstraction are adopted, an architecture ensues. Both disciplines focus on the world, science in detail, metaphysics in general terms. To accept logic is to confirm that if arguments fail, the conditions of well-formedness have not been met. An incorrect form is being applied. Of course, logic also assumes a series of axioms, logical operators and rules of argumentation. One cannot change these at random, but must decide beforehand what is going to be used. Organizing the field of logic, one distinguishes between formal, semantic and pragmatic logics. The first deals with the form of statements, deriving their truth-value on the basis of this alone, i.e. without taking contents into account. The second type is contents-based, using natural symbols (like cosmological or biological cycles and processes). The third type is used in certain practical contexts, like dialogue or argumentation. It is quite useless to apply formal rules to contents-based reasonings, or to define the latter in terms of practical applications. Each type has its own domain and applies its own kind of rules. A variety of logics has ensued (non-formal, non-linear, quantum, etc.).

§ 2 The Alliance of Contents.

α. Science solves problems and understands Nature in its diversity. Critical metaphysics totalizes Nature, understands the world insofar as the world goes and points to the transcendent world-ground understood as a process-based sheer potentiality. Sensate and mental objects are "natural", i.e. belong to Nature. Their horizontal aspect is their tendency to disperse their momentum, while their prime vertical triggers a balancing-out of extremes by altering the propensities ruling efficient states of matter, manipulating the virtual totality or set of "all possibilities" speculated to be present before any kind of actual manifestation, i.e.
before the actual collapse of an infinite number of possibilities -the primordial sense of matter, information & consciousness- to a single actual occasion hic et nunc.

β. Science and immanent metaphysics are natural allies. Their aim is to understand Nature, the world. But this alliance is conditional. On the one hand, immanent metaphysics must acquire sufficient information before starting to speculate about a "mandala" or totality. In terms of the current scientific paradigm, it must accept three fundamental facts : (a) the origin of the observable universe in the Big Bang some 13.7 billion years ago, (b) a 4.6-billion-year-old Earth and (c) the evolution of life-forms by means of (neo-)Darwinian natural selection. On the other hand, science must keep out of metaphysics and leave speculative activity to philosophers.

γ. Clearly science and transcendent metaphysics are not allies. A critical transcendent metaphysics posits a process-based, ultimate world-ground as inseparable from or in unity with the mind of Clear Light*. While this cannot be argued definitively (by valid conclusion or affirmative negation) and this direct experience of such a primordial unity or wholeness is non-conceptual and nondual, it is nevertheless a known, a datum of knowledge, part of a cognitive act.
γ.1 This special experience & knowledge ("gnosis" or "prajñâ") or living mystical awareness & insight ("Da'at"), arising in the awakened ("bodhi") or ultimate, very subtle mind of Clear Light*, may be prepared by any pliant mind realizing the fruits of ultimate logic and hence purified from conceptual reification. As a direct experience and a cognitive act, it is nevertheless beyond validation and unmistaken. Beyond validation because it involves a truth more profound, undeniable and certain than any other truth or prior belief ; the ultimate Eureka ! or "Aha !"-experience ; but it is nameless. Unmistaken because it apprehends what is as it is, nothing more and nothing less, without any conceptual elaboration.
γ.2 In this awakened mind, selflessness merely prehends its objects, conceptual & non-conceptual alike. If concepts arise, they are merely logical & functional entities, nothing more. The suchness of all phenomena is the thatness of their arising, abiding, ceasing and reemerging. The absolute mind only entertains the existential instantiation, attending the non-separability of fullness of togetherness and emptiness of own-nature, of compassion and wisdom, bliss and absence of inherent existence. Here the absolute nature of duality is directly experienced.

Science and immanent metaphysics both focus on the world. The former seeks empirico-formal propositions about the manifold, while the latter articulates its speculative statements, aiming at a general perspective and the unity of the selfsame manifold. This is not a God's-eye viewpoint from outside the world, but a tangential appreciation of the whole. Both disciplines, when working together and not against each other, will enhance the production of knowledge and lead to a better appreciation of both the manifold and the unity of the world. The latter points to the activity of a higher intelligence, a Grand Architect of the World, designing & conserving the world-order. Either this, or a mathematical miracle explains what is at hand. This is not a Creator, for such a transcendent Being, posited as radically different from its creation, cannot be conceived without mystification, paradox and contradiction.
However, transcendence can be conceived, not in terms of an ontological difference, but as (a) a continuous process and (b) a sheer potentiality that just was, is and will be. The relation between the actual quasi-finite world and the pure, infinite possibility is not a causal one (for spacetime as physically conceived starts with the arrival of the cosmos with the Big Bang), but a holistic determination (the greater whole encompassing the lesser).

§ 3 Empirical Significance & Heuristic Relevance.

α. To arrive at any scientific truth, i.e. a valid empirico-formal proposition in the realm of conventional, conceptual knowledge, significance is needed, implying the facts, results or data referred to by this truth are unlikely to have occurred by chance. Randomness is the non-order in a sequence of symbols or steps, a process lacking intelligible pattern(s) and their combinations. High, medium and low significance prevail. In this sense, on the scale of scientific truths, Schrödinger's wave-equation is the most significant.

β. Significance covers the objective realm, but significant facts may have no relevance, i.e. subjective importance. Relevance is the relation of something to the matter at hand as viewed by subjective & intersubjective intent. Insignificant statements may be highly relevant. The concept of "intelligent design" as proposed by monotheist creationists is unscientific and insignificant. But to many communities of fundamentalists this idea or mental object is highly relevant. In the context of process metaphysics, intelligent design harmonizes with cosmology & evolution. Relevance cannot be "tested" but only argued. The most sophisticated system of answers wins the day.

                significant                   insignificant
 relevant       science serving               metaphysics of hope
 irrelevant     science serving randomness    chaos

γ. Because metaphysics is not testable but only arguable, it cannot produce significance. Scientific validity calls for both experimentation and argumentation leading up to theory-formation. The phrase "metaphysical experiment" involves a contradictio in terminis. So it follows that all speculative inquiries done by theoretical philosophy are simultaneously insignificant and highly relevant. Metaphysics holds a very special place. As a heuristic of science, valid & critical theoretical philosophy is crucial in providing totalizing frameworks and in letting the scientists do their jobs, i.e. produce facts using tests & theories. Its insignificance is not factual, but the consequence of metaphysics being untestable. As soon as the philosopher becomes a scientist, inspiration vanishes. As soon as the scientist becomes a philosopher, subtlety is out.

δ. Metaphysics articulates a totality. Critical process metaphysics grasps this as impermanent (dynamical) and interconnected. There is much hope in both.
δ.1 Absence of permanence means all things can enter all things, for the absolute isolation given with the permanent thing is not present. This fluidity of the impermanent stream of actual occasions optimizes the possibilities of change & transformation. The low can turn into the high and vice versa. Optimizing duality, this extreme heralds the coming of that extreme. We are never stuck.
δ.2 As all actual occasions are interconnected and produce novel togetherness, the singular ego has "a place to move to", namely to all those countless suffering others.

∫ A metaphysics of hope fosters unity & harmony.
Non-substantial, unity is a perfect style of movement, whereas harmony is the cosmic law, "Maat" or "Dharma", ruling interconnectivity between all possible actual occasions, shaping negentropy, non-redundancy & reduced randomness.

Scientific propositions are significant because they reflect the objective findings of the community of sign-interpreters. They may be relevant or not, i.e. they may appeal and be of (inter)subjective use. Metaphysical statements are not significant but not necessarily pre-Baconian, i.e. picturing the world we would like instead of the way science thinks it is. Immanent metaphysics stays near (or next to) the findings of science and tries to place these in a general picture. But valid metaphysics is highly relevant, allowing us to grasp the possible unity and harmony of the world.

D. Limitations of a Possible Speculative Discourse.

§ 1 Logical Limitations.

α. Because metaphysics cannot be tested, it must present strong arguments. But these are based on logic, involving certain choices like logical operators and rules of argumentation. These must be accepted beforehand. Formal and informal logics prevail. Although identity, non-contradiction and excluded third figure in most, this is not always the case (cf. paraconsistent logics and intuitionistic logics with included third).

β. Any kind of arbitrariness forms a limitation. The validity of metaphysics cannot be absolute. Not only because new facts constantly emerge, but also because the axiomatic choices demanded by logic are (inter)subjective. Unlike science, metaphysics can never actually test its hypotheses. This is the unavoidable logical limitation of metaphysics.

All conceptual elaborations are based on logic. Down the centuries, Aristotelian logic (not unlike Euclidean geometry) has been considered as the only possible way to establish the truth-value of statements. But just as Riemannian geometry showed that lines parallel in Euclidean geometry may indeed intersect, non-formal logic and alternative formal logical theories provide evidence of the importance of establishing the logical rules to be applied beforehand. Certain phenomena investigated by science, like the particle/wave paradox or the superposition state of the wavefunction, defy the principle of non-contradiction deemed the cornerstone of correct thinking. Indeed, quantum logic calls for a different set of first principles and so cannot be approached with classical formal logic. These limitations apply to any kind of conceptual system, and so in that respect both science and metaphysics share the same limitation.

§ 2 Semantic Limitations.

α. The contents of scientific knowledge are based on sensate & mental objects. The contents of metaphysics on mental objects only. There is no way to test speculative statements. Their relevance is heuristic, inspirational & inventive. The semantics of science leads to a better understanding of the manifold and so to technology. The semantics of metaphysics leads to an understanding of the whole based on speculative statements derived from the best of science and so able to inspire the latter.

β. Creative concepts throw a vast number of meanings together, shaping powerful symbols. These ingredients of the grand story of the world-system are pertinent mental objects. The need for a critical metaphysics is most pressing here. No sufficient ground can be invoked. Mental objects are not inherently existing substances, possessing their properties from their own side, they are other-powered.
This means their properties derive from the process of interdependence & wholeness, not from absolute isolation and autarchy. Past metaphysical systems were substance-based, not process-based. They included the ontic ego and/or ontic (higher) self existing independently and separately.

γ. A valid critical metaphysics works with the absence of sensate objects and guards against the unwanted tendency to reify mental objects. Not a science, metaphysics is not bound by scientific (experimental) methodology. Theoretical philosophy is not to copy the ways of science. Remaining irreversibly interlinked, both are distinct domains of conventional knowledge, the one aiming at particularities, the other at generalities.

The semantic limitations of science and metaphysics differ. The former are primarily defined by sensate objects. If all swans are deemed white, the discovery of a black swan indeed introduces a considerable shift in meaning regarding the word "swan". Metaphysical statements are limited by the discoveries of science and the ability of the speculative system to grasp the whole in a comprehensive, non-reductive and arguable way. Of course, an advance in these only calls for better mental objects, and does not entail the discovery of any novel sensate object.

§ 3 Cognitive Limitations.

α. The activity of science is conceptual in a formal sense. Valid scientific knowledge stands between the knower and the known. Thanks to theory & testing, propositions of fact come into existence. This production leads to a complex hierarchical network of scientific propositions with a central core ; the current scientific paradigm.

β. Immanent metaphysics cannot be eliminated from the background of argumentation and experimentation. But its mode of cognitive activity is creative, not formal or critical. Immanent metaphysics (using hyperconcepts) brings science to greater unity, inspires it to pursue the production of valid (significant) scientific knowledge and invents a possible panoramic view of the world.

γ. Transcendent metaphysics is altogether a different matter. Here an ultimate mind is posited, one able to directly know the absolute in its absoluteness. This unveils the world-ground of the world-system as apprehended by an ultimate mind of Clear Light*, namely the mind of God*.

Science and metaphysics each operate in a different mode of cognition. Formal and critical thought apprehend their objects as possessed by an empirical ego. The latter is not a substantial entity, nor are the objects of science in any way substantial (although they do tend towards essentialism). The propositions of science merely reflect a truth-for-the-time-being, and so cannot have any definitive pretence whatsoever. Being conventional knowledge, they aim to solve problems to enhance the functional efficiency whilst dealing with objects. The ultimate nature of these objects is not under investigation. In that sense, science should always entertain a high dose of humility, not stepping outside the domain of appearances. Contrary to this, creative thought apprehends an ontic self trying to grasp the totality substantially. Here thought seeks a self-sufficient ground and cannot find any ! The tendency of conventional knowledge to reify is actualized, leading to the apprehension of an underlying reality behind the mental & sensate objects of formal & critical thought.
Lastly, while selfless nondual cognition does away with this substantializing approach, discovering the impermanence of all possible objects of thought, it does lead to a direct experience of the ultimate truth of all possible phenomena, namely their impermanence and interconnectedness. This ineffable experience, which cannot be conceptualized, is nevertheless very definitive in a non-conceptual way, leading up to the mind of Clear Light* apprehending the absolute nature of all phenomena.

1.3 Transcendent Metaphysics.

While immanent metaphysics, by positing a series of limit-concepts to define the so-called "periphery" of the world, stays within its confines, critical transcendent metaphysics identifies this endeavour as rather artificial. How can the world have a periphery ? If the world is all there is, then there is no "outside" of the world. The Platonic division, so cherished by classical transcendent metaphysics, between a finite, derived world of becoming and an infinite, primordial world of being is devoid of sense. Is this not more based on cognitive limitations than on ontological divisions ? The world, insofar as conceptual rationality is concerned, is indeed quasi-finite (i.e. limited). So how can an actual infinity exist as part of the world ? But in terms of nondual cognition, the world-ground is infinite. So the distinction is epistemic, i.e. rooted in the way the subject of experience cognizes the objects it possesses. Moreover, conventional knowledge posits a world of seemingly independent objects, and only in this context has "periphery" any actual meaning. Realizing, by way of ultimate logic, that no inherently separate entities exist immediately does away with any fixed notion of "outer" and "inner", for both are interdependent and so arise simultaneously. Viewing objects conventionally, they are limited (quasi-finite). Viewing the same objects ultimately, they are unlimited (infinite) ... Substantializing the distinction brings about the aporia between an inherently existing finite world and an inherently existing infinite transcendent self-sufficient ground "outside" the world. How the world looks when nobody is apprehending it cannot possibly be known, for object and subject also arise or coexist together.

Conventional knowledge and its conceptual rationality cannot move further than designating a limited world and a series of limit-concepts like designer, conserver and the mind of Clear Light*. Suppose it imputes an Author or Creator, then it moves beyond the possibilities of conceptual reason. Non-conceptual nondual cognition directly experiences the world-ground as infinite and inseparable from the mind of Clear Light*. It also prehends the ultimate mind of God*. So from the point of view of conceptuality and its immanent approach, the world-ground is transcendent and infinite and so is its (ultimate) apprehension or prehension of it. Insofar as nondual cognition and its transcendence is concerned, conceptuality is immanent and finite and so is its (conventional) designation of the world. In terms of nondual cognition, the ground of the world is infinite, but the exceptional direct experience on which this is based is ineffable. If we limit ourselves to conventional and conceptual knowledge -shared by most-, considering this to be the norm, then we say the world is finite, for the common experience on which this is based can be articulated both by science and immanent metaphysics.
But the latter are, although valid, mistaken, for the ultimate nature of the world, its ground, is infinite and beginningless. Indeed, conceptuality conceals the ultimate nature of phenomena, and if it tries to grasp this absolute without the benefits of ultimate logic, this ultimate will be defined as inherently existing, i.e. as independent and separate (self-powered from its own side). Then the world-ground has been reified.

Traditional transcendent metaphysics, defined by Platonic or Peripatetic ontologies, posits a supreme substance "outside" the world-order. Pre-existing the world, this unchanging, permanent, static supersubstance is the Creator-God fashioning the world "ex nihilo". Critical transcendent metaphysics introduces the transcendent, absolute, ultimate nature of all phenomena as (a) the absence of substantiality, (b) an infinite number of material & informational possibilities, virtualities & potentialities manifesting as finite actual occasions prehended by (c) the absolute or ultimate mind (of God*). And these non-temporal formative elements are themselves not concrete actual occasions. The world-system is then both potentiality (the world-ground of pure possibilities empty of substantiality) and actuality (the world as interdependent phenomena), both mere possibility and actual occasion, both world-as-potentiality and world-as-actuality. Of course, this difference is merely epistemic, i.e. depending on the mode of cognition with which the world-system is apprehended. Valid conventional knowledge apprehends phenomena as interdependent but -given scientific methodology- reifies them. Invalid conventional knowledge posits objects which cannot be validated by science. These too are grasped as existing from their own side, possessing their properties inherently. Here the degree of delusion of truth-concealment is maximal. To simultaneously grasp the world-system as, on the one hand, conventional, limited (quasi-finite) and interdependent and, on the other hand, as ultimate, infinite and empty of inherent existence, is apprehending it as it is, i.e. in its suchness/thatness. This is a bewildering paradox for reason and an enlightened Divine phenomenon designated by the mind of Clear Light*. The direct experience of this can only be prehended by power of nondual cognition ... and remains ineffable.

A. Jumping Beyond Limit-Concepts.

Conventional knowledge is always conceptual. It cannot move beyond. But concepts are deceptive. While valid conventional knowledge correctly identifies efficient operations, it nevertheless tends to grasp the properties of mental and sensate objects as subsisting in its objects. They are then deemed independent & separate from other objects. The universal interdependence of all phenomena is not clearly seen, if at all. So conventionality, devoid of the fruits of ultimate analysis (uncovering the non-substantiality or process-base of all possible phenomena), leads to the illusion concealing their ultimate truth, namely the absence of inherent existence. This illusion is the result of mental obscuration or ignorance. This ignorance is the root-cause of suffering.

Ultimate knowledge is always non-conceptual and so ineffable. Although a datum of direct experience, it cannot be cast into the mould of conceptual object/subject relationships. It cannot undo the un-saying of its prehensions. Ultimate knowledge no longer grasps at objects as autarchic, but simultaneously observes their interdependence and lack of substantiality.
This is called the "prehension" of the ultimate truth, the union of bliss & emptiness, of compassion & wisdom, of dependent-arising and the lack of self-power.

Mental obscurations and epistemological transgressions always walk hand in hand. These lead to ontological transgressions, the mistaken identification of entities as possessing their characteristics from their own side, i.e. without being other-powered. These wrong views on entities build transgressive metaphysics. By identifying the correct object of negation, namely inherent existence, one deconstructs the objects of the mind and remains aware of the margin to be drawn next to the ongoing stream of conventionalities. In this margin, the false exits are identified as reifications, annihilating the disruptive influence on the mindstream. Then one may accept the functional ongoingness of conventional reality as apprehended by conceptuality while simultaneously prehending their fundamental lack of inherent existence, i.e. directly experiencing or "seeing" their being empty of own-self or own-nature in the light of them being full of otherness.

§ 1 Epistemological Transgressions.

α. To grasp at sensate & mental objects in terms of valid empirico-formal propositions and valid speculative statements always implies a certain amount of reification.
α.1 Epistemology (together with ethics & aesthetics) decrees rules one cannot deny without using them. These are transcendental and so critical concepts. This critical system of knowledge production is not grounded in anything. It is pre-ontological and pre-scientific (but not pre-logical). Transgressions happen when the objective & subjective conditions of the game of true knowing are rooted in a reified, self-sufficient, substantial (essential) ground before knowledge, in a "being" preceding "knowing". There is no epistemology without object (idealism) or without subject (realism). Both ideas of reason regulate and operate two interests in truth, one focused on correspondence and the other on consensus.
α.2 Coarse, subtle and very subtle obscurations endure as long as, using substantial instantiation, self-power or essence is attributed to objects. Even the conceptual structure in which conceptuality unfolds (like space, time & the categorial schemes of normative philosophy) should also be viewed as not existing on its own. Lastly, lack of inherent existence or emptiness is merely a property of objects, and so not an object on its own. This emptiness of emptiness is only realized with great difficulty. Hence, as long as there is reified conceptuality, there is mental obscuration and so suffering due to the ensuing supposed isolation of objects and/or subjects. Positing emptiness as a substance is indeed destroying the antidote to ignorance.

β. To reify the object of knowledge is to consider any sensate thing as existing from its own side, independent & separate. The identification or imputation of any sensate object is always dependent on a cognitive act from the side of the conceptual mind. This happens because of a failed attempt by this mind to stabilize properties as inhering, which, after in-depth ultimate analysis, are merely found to be changing or impermanent (although interconnected).
β.1 Given object A, one may ask : is this a compound or not, can this be further subdivided or not ? As all objects of the conventional mind are compounds, the same question may be posed regarding the various subdivisions etc. In this way, nothing final is found. A regression ensues.
β.2 If the regression is stopped ad hoc, then a hardly convincing ontological (reified) self-sufficient ground is designated. It cannot pass the test of ultimate analysis and so this hippopotamus cannot be found.

γ. To reify the subject of knowledge is to understand the mind and its empirical ego as existing from its own side. But if we ask where the mind or the ego is, nothing is found except sensate objects, volitions, emotions, thoughts and moments of consciousness. These are found to be impermanent and hence no inhering, self-sufficient stability can be traced. Again the reification fails and the empirical ego (with its sense of permanent identity) as well as the ontic self (designating itself as a mental substance) cannot -under analysis- be found.

δ. Due to the power of these mental obscurations, scientific propositions or even some speculative statements seem to be correct. Validity or truth-for-the-time-being is confused with absolute truth. Because of reification, sensate & mental objects merely appear as independent and permanent. Believing our own imputations, we create a reality/ideality of our own making and then blame the illusion for not remaining ! Thinking things are independent, we temporarily make them so. But because they are ultimately impermanent, we are bound to suffer from our own mistakes.

ε. Even the formative elements of the world-system (the world-ground composed of primordial matter, primordial information and the transcendent aspect of God*) are not permanent. God*'s impermanence does not, however, preclude His continuity as symmetry-transformation.
ε.1 Empty of themselves, they are full of impermanent material & informational pure possibilities and an ongoing process of Divine evaluation and adjusting. These properties do not act as pre-existing substances inhering in the "primordial", but pre-exist as possessed by the virtual togetherness of the propensities of the world-ground.
ε.2 The emptiness of emptiness is precisely this : the lack of inherent existence is not a superobject, nor an underlying self-sufficient ground. The world does have a ground or fundamental stratum, but this too is empty of itself and so in no way substantial. It is sufficient, but not self-sufficient.

The first step is a wrong view. Start with that, and end in confusion, ignorance, obscuration & distraction. Reification is the great culprit. This is the ultimate epistemological mistake. Once identified, one needs to return and return to the ultimate logic of its undoing, for the mind entertains a strong habit of grasping at inhering properties.

§ 2 Ontological Transgressions.

α. Reifying the object of knowledge at the level of ontology, i.e. considering the absolute touchstone of that which is as existing on its own and separate from the subject of knowledge, makes it easy to argue realism, the ontological view accepting objects exist from their own side as part of a real world "out there". The most fashionable of these ontologies, materialism or physicalism, adds that all objects are fundamentally nothing more than physical things, i.e. compounded material aggregates composed of particles, waves, fields & forces and their relationships. Although non-material stuff like information or consciousness may be accepted (as in emergentism), reductionism brings them back to matter. This is the case of epistemologies articulating how the object constitutes the subject. Classical examples : Aristotelianism, empiricism, materialism, (logical) positivism & physicalism.
α.1 Consider any macroscopic material object. Because it is composed of a large number of molecules made out of atoms, the influence of gravity is paramount and so cancels the effects of quantum uncertainty (cf. Heisenberg's principle of indeterminateness operating at the atomic & subatomic levels). On this macrolevel, position and momentum behave in a conventional, common sense, "classical" or Newtonian way. The object is not between everywhere and nowhere. But this continuity & definiteness are illusory. Dividing the object into smaller and smaller pieces will eliminate the effect of gravity and eventually, at the atomic and subatomic levels, the constituent parts are only probability-waves yielding specific quantities when observed by an observer. At this point, the conventional, physical object/subject relationship breaks up, and the separateness, definiteness & locality of objectivity are gone.
α.2 Only when a subject of experience interacts with the probability-wave does it collapse, turning an infinite number of possibilities into a single one. As all macroscopic objects are erected upon their atomic foundation, conventional realism is merely apparent and the difference between classical mechanics and quantum mechanics depends on temporal & spatial scaling. On the fundamental level, object and subject cannot be defined as independent, separate and local. The deep-structure of matter calls for the intimate, continuous interaction between the observer and the observed, between the knower and the known.
α.3 Lacking objective mooring, i.e. without a definiteness independent of and separate from a subject of experience, the conceptual mind has no way to grasp, impute or possess its object. Like waves on water, mental elaborations cease. This is the beginning of the purification of the conceptual mind, ending in the exhaustive, thorough arrest of all substantial instantiations ; the annihilation of reification.
α.4 Considering the apparent solidity of macroscopic objects, realize atoms consist mostly of empty space. The atomic core (of neutrons and protons) accounts for 99.9% of the atomic mass, but it occupies as much space as a grain of rice hanging in the centre of a football stadium (a rough scaling is sketched below). The reason why macroscopic objects appear as continuous (as solids, liquids or gases) is the electromagnetic bonds between the constituent atoms, not the presence of "solid" mass. To build relationships is like bonding togetherness.
α.5 Consider the apparent immortality of electrons, photons & neutrinos seemingly left undisturbed. As all particles interact with other particles, this absence of disturbance is relative. Not a single material thing that is part of conventional reality subsists forever, for all phenomena arise, abide, cease & reemerge. Interconnected (organic) impermanence & absence of inherent existence are fundamental to all possible actual occasions. Even the world-ground itself, although not nothing, lacks own-nature and is therefore without properties inherently existing in it. The primordial domains are the properties of this virtual world-ground. The virtual is the father of the concrete. The possible is the mother of the actual.
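To get a feel for the rice-grain analogy of α.4, here is a rough order-of-magnitude sketch ; the radii used are textbook ballpark values, not measurements taken from this text :

 r(nucleus) ≈ 10^-15 m,  r(atom) ≈ 10^-10 m,  so r(atom) / r(nucleus) ≈ 10^5.

Scaling the nucleus up to a rice grain of about 2 mm (2 × 10^-3 m) multiplies all lengths by the same factor ; the atom then spans roughly 2 × 10^-3 m × 10^5 = 200 m, comparable to a large stadium. By volume, the nucleus fills only about (10^-5)^3 = 10^-15 of the atom, which is why "solid" matter is almost entirely empty space held together by electromagnetic bonds.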
β. Reifying the subject of knowledge at the level of ontology, i.e. considering the subject or community of sign-interpreters as existing on their own ontic (noetic) plane and separate from the object of knowledge, leads one to argue idealism, the ontological view that the object is constituted by the subject, the community of subjects and/or their mental operations (like arguing and establishing a consensus). Although material objects are accepted, they are merely the reflection of non-material, mental activities. This is the case of the subject constituting the object (cf. Platonism, rationalism, psychologism, transcendental idealism, existentialism, etc.).

γ. Realism reifies matter. Idealism reifies the mind. The former reification turns the conventional world into a subsisting materiality, the latter brings in a supermind transcending the world, originally creating it and sustaining it. Realism reduces the world to the order of the actual world. Idealism deems the latter to be the creative result of an original, primordial supermind eternally existing from its own side.

The second step is a wrong intent. Once a wrong view is realized, either in terms of a reified object of knowledge or a reified subject of knowledge (in epistemology), the reification (or substantialization) needs to be reified itself (in ontology). Finally, substance is essentialized. The seal is sealed. This by letting the subject establish the object (based on an epistemology without object) or by inviting the object to establish the subject (based on a theory of knowledge without subject). The solution is to never grasp the object or the subject as permanent.

§ 3 Transgressive Metaphysics.

Building complete worldviews on the basis of epistemological & ontological transgressions leads to static, uncompromising, unworkable, inefficient and unscientific approaches to the major questions of life : Why something ? What about the universe, life & consciousness ?

β. A metaphysics of idealism fixates a supermind and attributes an inherent existence to it. It thus turns the activities of the mind into either a perfect, ideal "true" reflection of this supermind, or into an imperfect approximation of it. Non-physicality is pivotal. The distinction is between an absolute mind and a totally useless, imperfect and thus rejected physical state of affairs. Rather, one should make clear facts are not exhaustively intra-mental. The ultimate distinction is between, on the one hand, impermanent mental states and moments of consciousness and, on the other hand, the imposed (projected, imputed, attributed) inherently existing properties of the (super)mind.

γ. A metaphysics of realism posits a real, objective, external & substantial world "out there". Physicality plays a crucial role. Despite possible emergent properties, the role of physical, molecular, atomic & subatomic events is emphasized, and complex phenomena are -if possible- reduced to their material parts. Realistic activities of the mind correspond with the Real. The distinction is between an absolute objectivity stimulating a receptive cognitive apparatus, and thus between what is Real and what is merely subjective or unreal. Rather, the difference between perception and sensation should be borne in mind, as well as the constituting activity of the subject. In the co-relative activity of producing conventionality described by the valid empirico-formal propositions (of science), the organizing & intending work of the Ideal is at least as important as the Real.
δ. Metaphysical idealism turning religious will invent an omnipresent, omniscient & omnipotent supermind. These qualities inhere in it and are absolute. Hence, this supermind must be a superbeing, a Creator-God. As the subject constitutes (imputes) the object, this God creates the world "out of nothing", i.e. as an act of His Free Will. This worldview fails to understand that such a supermind cannot be found and that, even if it were, it could not create, produce, cause or effectuate anything.

ε. Metaphysical realism turning materialist will invent an objective, physical world producing all possible phenomena. The latter are physical. The non-physical is rejected. If accepted, as emergent properties, then the non-physical is caused by the physical (downward causation is deemed absent). Materialism cannot be articulated without a subject of knowledge. Moreover, perceptions are not sensations. Finally, the non-physical interacts with the physical, and matter, information & consciousness are all aspects of every single actual occasion.

The third step is a wrong object. Having reified the conditions of knowing and secured the justificators (the ideas of the Real and the Ideal), these two objects are totalized. This either results in a static, substantial, eternal mundus or gives birth to the idea that nothing really exists (while all things are merely empty of themselves, not of something). Metaphysical transgression is not primarily the polarization of what exists in the vertical and horizontal vectors of the mundus, but follows from the need for reification. Finding a ground is not enough. Not even a sufficient ground suffices. Indeed, a self-sufficient ground is designated. In this view, the world has to be finite in an inhering sense ! But if the world-ground is not a self-sufficient ground, nor an actual occasion, it must be a process, a dependent-arising, a coherent symphony of abstract possibilities. Then world and world-ground are not different, but distinct entities ; the former actual, the latter abstract.

§ 4 Deconstruction & the Margin.

α. Deconstruction does not destroy its object, but merely its reification. Armed with ultimate logic, all possible inflexible, static, solid, eternal and substantial objects are investigated and found not to exist as they appear. Found to be impermanent, they are non-substantial. Eliminating their tendency not to move, pushing away their inertia, is to realize the absence of own-nature in each of them. They do not exist as separate and independent objects, but merely as interdependent happenings or displays of actual occasions.

β. Radical postmodernism (as the end of the "grand stories") remained dependent on modernism. As modernism lacked internationalism and multi-culturalism (being mostly Western), moderate postmodernism integrated the global perspective. Building a deconstructed worldview is the task of hypermodernism, multiplying a global perspective with ecological & social sustainability.

γ. The margin is an imagined space defined by a dividing-line drawn parallel to any text. This space is used to identify and mark all reified concepts present. These are the transcendent signifiers one cannot avoid but -to satisfy parsimony- must keep to the bare minimum. Two are identified : the mind of Clear Light* and God*.

δ. Deconstruction is not a passive analysis post factum, but happens in the heat of the action. As with a swimmer or a singer, complex forms emerge in and by the action itself, not by anything "from the side".
The moments constituting the stream are never identical and never return. All is constantly, permanently lost.

∫ Finding not a single substance, the wise dine & wine on wisdom.

Avoiding three wrong steps, namely a wrong theory of knowledge, a wrong ground and a wrong totalization, deconstruction focuses on all possible reified objects. Both on the side of the subject of knowledge and on the side of the object of knowledge, the solidification, isolation, fixation and substantialization of the Real or the Ideal are identified. At some point, when this has happened repeatedly, the mind stops imputing independent & separate existence and stops grasping at the supposed own-nature of things. The "seal of emptiness" is placed on all sensate & mental objects (cf. the "mahâmudrâ"). As a result, objects no longer appear as they do, but unveil their other-power, i.e. the fact they merely exist because of determinations and conditions outside themselves. They are something, i.e. not nothing, because they are functionally related. Without these efficient bonds, they do not exist, and if they appear to exist from their own side, the mind is necessarily deluded & obscured.

B. Conceptuality & Non-Conceptuality.

When the mind cognizes, it grasps at an object and possesses it. Nearly simultaneously with this, to further identify it, the mind conceptualizes and so imputes a concept, name or label. Between the act of cognizing and the moment of conceptualization, a small gap occurs. Between two moments of conceptualization, another isthmus, "bardo" or interval is at hand. Cognizing and conceptualizing are not simultaneous. Grasping the object and naming it are indeed two consecutive steps. This can clearly be felt in ante-rationality, in particular in mythical and pre-rational thought. In these early modes of cognition, the concept is not stable. In mythical thinking it is psychomorphic, taking on the shape of subjective experiences. In pre-rational thought, it has a certain kind of stability, but still vanishes quickly due to a plastic semantic field. While proto-rationality works with mature, stable concepts, they are not abstract but concrete and so are defined by the context in which they appear. This gives them a semantic multiplicity, a fluidity prone to confusion. Ancient Egypt and pre-Classical Greece feature these kinds of opaque conceptualizations. Clear meaning can only be established by lengthy comparisons and minute studies of all available contexts. Even then, precise meaning can only be suggested, not inferred.

The empirical subject knows the momentary field of consciousness as (a) the direct, experiential, phenomenological horizon with its central ego cogitans, (b) conscious contents and ongoing fluctuations, (c) together forming the mindstream consisting of consecutive moments of sentient activity, mental activities organized and ruled by the mental operators associated with the various modes of cognition. Now, thanks to the abstract concept, all mental operations are boosted by the application of context-free relations between stable concepts, leading to conceptual elaborations and the correct & valid use of conventional knowledge. The noetic aspect of the "Greek miracle" is precisely this comprehensive use of abstraction, leading up to the concept-realism of Plato and Aristotle. The latter is an exaggeration unwarranted by critical reason. Kant did not accept the non-conceptual (cf. his rejection of "intellectual intuition").
He considered this not to be given to everyone and so too exceptional to be part of a critique of pure reason. The notion that nonduality is a mode of cognition calling for a cognitive act (featuring object & subject) is based on the direct experience born out of study, reflection and meditation on ultimate truth. With enough effort, this is the share of every human being wishing to end ignorance on the most fundamental level possible.

§ 1 Conceptual Thought.

α. When, from specific instances, a general idea is inferred or derived, this abstract is called "a concept". With the "Greek miracle", the ante-rational stage of cognition, with its strong pragmatic mental closure, had been superseded. Formal rationality imposed both contents & form.

β. Ante-rational concepts are either a-conceptual, pre-conceptual or concrete. In myth, they are psychomorphic and make no distinction between inner & outer, obscuring the distinction between sensate & mental objects. In pre-rationality, concepts are unstable and therefore mere pre-concepts. In proto-rationality they are stable but concrete, defined by context only. In all cases, a confused type of cognition ensues. There is no stable mental form, except in the immediate coordination of movements. Symbols only persist for brief moments or as part of designated (and unstable) contexts. Signals & icons persist (especially in the earliest two modes of cognition). With the coming of rationality, ante-rationality is pushed into the unconscious. The more a culture is refined, the less instinct & emotion need to be subdued. The outstanding feature of Western rational culture is to dominate instinct & emotion for "a good reason". This is the origin of pettiness & silliness.

γ. Conceptuality overlays or superimposes a general name, label or symbol on sensate and/or mental identifications of spatiotemporal variations in a set of actual occasions (caused by a finite number of sensuous impulses and/or mental cogitations). This involves a logical mistake, for how to justify the leap from a finite number of concrete sensuous and/or mental elements -leading up to a pre-conceptual identification- to an infinite number of such elements in the three times as defined by an abstract concept ? What is identified, the identifier and the process of identification are all impermanent and so prone to change.

δ. The distinction between the pre-conceptual apprehension of sensuous impulses projected on the neocortex and the moment of conceptual overlay is crucial to understand how the name or label associated with what has been identified differs from the latter. These pre-conceptual sensate objects, indeed resulting from the earliest moments of interpretation, are nevertheless not yet concepts, i.e. abstractions, generalizations, static names or labels. And they are certainly not the reification of such concepts as in concept-realism, attributing own-nature or substantial sense to concepts.

ε. Note these distinctions. The conceptual process involving sensate objects has three phases : in the first, the sensate objects (projected by the thalamus on the neocortex) are pre-conceptual and identified by way of a variety of actual occasions present in the direct, phenomenological field of the observer during the act of (total) observation ; in the second, this concrete information is generalized and so named and labelled. Here the conceptual mental operation is at hand, one identifying a universal and its instances !
In the third, the subject of knowledge apprehends the general concept or name, superimposing it on all subsequent manifestations of a similar sensuous stream of actual occasions. In all forms of pre-critical rationality, the third step leads to reification, positing a substance existing from its own side, keeping its own inhering properties, separate and independent from others.

Conceptual thought operates abstract concepts and brings these together in opinions, notions, hypotheses, theories & speculations. Thanks to generalization, the cognitive act is liberated from context. Eventually, the structure of conceptual thinking itself can be apprehended, leading to a logic devoid of contents, i.e. formal. Despite the fact classical formal logic is not the only possible logic, concept-realism is thoroughly dedicated to it. Of all the basic principles, non-contradiction rules supreme. The Newtonian world also ran in absolute, linear terms. But this proved to be a good approximation only. Indeed, the fundamental nature of physical objects involves quantum logic defying strict non-paradoxality. And most living systems, including the human brain, have an architecture, a software executing a chaotic logic. So although conceptual thought is crucial to escape context & content, it is not an absolute tool, but merely a relative waymark to keep track of the conventional, common-sense worldview. This sobriety gives the power to climb the mountain of meta-rationality, if such an undertaking is deemed necessary at all. Like ante-rationality, rationality has mental closure. Moreover, because of the limit-concepts of immanent metaphysics, the creative mode of cognition is not necessary to solve the problems of conceptualization (namely reification). So the leap enabling us to face absolute truth is an act of freedom. From the side of reason, it can be nothing else but a leap into the absurd ... So be it !

§ 2 Ante-rational Regressions.

α. The realization of rationality does not guarantee the absence of unwanted returns or regressions to the earlier stage of cognition. It is crucial to grasp that ante-rationality, although made unconscious, is still prevalent in instinctual and emotional matters, i.e. those areas where context plays an important role. Signals and icons are defined by our ante-rational mentality, given shape by libidinal, tribal & imitative foci of consciousness, by an antique ego fed by the memories of the earliest experiences of conscious life as a human being.

β. In the chaotic sea of ante-rational thought lurks the Leviathan of irrationality. The absence of its reemergence needs to be checked again and again. If this effort is unrelenting, regressions can be avoided. But due to habit, the mind settles down and breeds bad defences.

γ. Ante-rationality, because it has mental closure, can fabricate a number of fantastic stories and implement the terror of concrete words. Without rationality, a single deity turns into billions ; each with its own silly walk or Moon dance. With rationality, formal and critical, the substantial God is unmasked and the God* of Process dawns.

Aware of the presence of instincts and emotions, the integrated rational mind, formal & critical, no longer subdues nor renders unconscious the various processes stemming from an ante-rational approach to the world. Training these eventually leads to emotional intelligence as well as to a gut-feeling assisting the proper functioning of the mind. Of course, at the end of the day, in this mode of cognition, only reason judges.
But because even the critical mind cannot eliminate the need to reify, such judgments may be mistaken. Only absolute truth brings to light the fundamental true nature of all possible phenomena. Because of this reifying tendency, reason cannot completely compensate for instinct & emotion. Only wisdom realizing emptiness can.

§ 3 Meta-rational Transgressions.

α. The complexification of cognition moves beyond rationality. Creative and nondual thought make way for cognitive horizons far beyond the capacities of the mind working out in the rational stage of cognition. To limit the mind to what seems to be given to the majority is to make the infinite serve the finite ; an absurdity. Both define their own domain, the finite world finding its infinite potential in its own world-ground. The intellect crowns reason. Where reason apprehends, intellect prehends. Abstraction has to be paid for by a lack of inventiveness, creativity & novelty. Situated between ante-rationality and meta-rationality, rationality represents the Middle Way between instinct and intuition. Without the latter, rationality lacks the ability to create novelty. With too much of this, cognitive activity is either confused or lacking purity (i.e. a perspective on the end of reification).

γ. Creative thought prepares intellectual prehension by serving as a purgation for the subtle forms of reification. Totalizing and the reification of a totalizing object need to be distinguished. Creative thought first allows reification to explode. Positing the ontic self in its "mandala", it then annihilates the reified totality. This is like ending ignorance with one single blow.

δ. Insofar as creative thought posits an ontic self, its creativity is sullied, leading to brontosauric statements. The latter are not devoid of dramatic exaggeration and have no other use than to totalize the creative object of knowledge. They do reflect the power of novelty and inventiveness, the ornaments of consciousness. They evidence the establishment of a higher-order mental level, albeit one covered by the fixating imposition of an ontic self possessing itself and its properties from its own side, inherently, imputed as an eternal self-powered, self-identical & nondependent mental substance. It goes without saying that to the ante-rational layer of mind, such megalomanic display is very appealing, stimulating the re-emergence of instincts & emotions, signified by signals & icons. However, it merely serves -by way of paradoxical intention- the end of reification. The ontic self makes way for the transparent self, ending in selflessness.

ε. The higher ontic self is not a strong object of negation, but its emptiness is. This self needs to be thoroughly identified before it can be emptied of itself, thus leaving not naught, but the very subtle transparent self-reflection present in the cognitive act. This "prise de conscience" is a totalizing awareness of consciousness as object and so, if not reified, the portal to the selfless self-awareness of nonduality. The creative mode merely prepares the mind, refining it to the point it apprehends the totality of its sensate and coarse mental objects as empty of itself, i.e. as a process without own-nature ("svabhâva"), with "no self" ("anâtman"). This means they are not themselves, neither are they not something ! Avoid both extremes of eternalism and nihilism.
ζ. The reification of the higher self, designating an ontic, substantial self or subjective own-nature, can also be a stepping-stone to the reification of the transcendent object itself.
ζ.1 When emptiness is designated as a ground not to be emptied of itself, absolute truth is raised to become a different ontological entity, plane or level, giving birth to the idea of the absolute being high up (Heaven) versus the relative being down low (Earth).
ζ.2 For emptiness to be empty of itself, the absolute must merely be a property of every possible actual occasion, existing conventionally in every possible apprehension of sensate & mental objects.

When Two Truths become a single Truth, how can the shepherd mind his flock ?

Ante-rationality needs reason to solve its problems, but reason cannot silence instinct & emotion. While for a rational human being reason has the final say, the decisions of reason lack the capacity to encompass the various semantic connotations invoked by instinct & emotion. Rationally, these signals & icons seem outlandish and irrelevant, but as far as these ante-rational mental imprints are concerned, reason speaks a foreign language and so imposes a misunderstood rule. Rational analysis cannot integrate ante-rational information. Another false path is to replace reason by meta-rationality. As if the latter is not imputed on the basis of conceptual stability ! To make the choice to totalize, ontologize & then desubstantialize is the prerogative of free study, in particular metaphysical studies. Meta-rationality does not yield a superobject nor a supersubject, but merely a panoramic perspective on the process of the mundus and a philosophical reflection on the transcendent object based on its (direct) prehension. As soon as a speculative discourse invoking the absolute becomes a eulogy of the "thing of things", possibly inventing a theo-ontology, it transgresses the "ring-pass-not" of critical thought. The transcendent object, being empty, cannot act as nondependent or ontologically different from the relative.

§ 4 Direct Experience & Cognitive Nonduality.

α. To introduce nondual thought, reason & contemplative experience have to be distinguished. Ultimate logic only eliminates the reification of the concept. It does not end conceptuality, for the latter belongs to the valid processes of the conventional world ruled by relative truth ; valid but mistaken. Compassion and meditation on the emptiness of all possible concepts, involving a deep reconditioning of the mindstream, bring about the end of the reification of concepts. This is the purification of the conceptual mind. Concepts are not the problem, their reification is. Prehension no longer grasps, but finds objects as they are.

β. A direct introduction to and discovery of the natural light or the mind of Clear Light* does not cause something, but rather, as a perfect mirror, reflects, when secondary causes manifest, the movements of energy and the processes appearing in it. The natural light of the mind cannot be observed, for it is the very thing observing, perceiving only the suchness/thatness of the actual occasions without conceptual interpretation. This light is a potential, an open space of possibilities. It is the nature of the mind as it is by itself, its witnessing clarity.

γ. Nondual thought is not discursive, nor conceptual. In other words, the apex of thought is non-verbal. Myth, the beginning of cognition, is also non-verbal, but opaque & non-reflective (and, mutatis mutandis, non-reflexive).
Nondual thought, the end of cognition, on the contrary, is highly reflective (dynamical, differential, energetic) and sublimely reflexive, with the absolute subject prehending the absolute object. But this is no longer "inner" knowledge, nor even arguable (immanent) metaphysics, for it lacks all forms of conceptual duality and cannot be symbolized, except in sublime poetry. Perhaps as a direct, self-liberating, self-transforming, wordless, instantaneous awareness of the unlimited wholeness of which one's nature of mind is part. If this highest, nondual awareness is called "wisdom", then wisdom transcends the concept, be it concrete, formal, critical or creative.

δ. Because the nature of mind is ultimate reflectivity & reflexivity (the absolute I knowing the absolute Other), the original mind of Clear Light* is thus (a) self-clarity, like a Sun allowing itself to be seen or as a lamp in a dark room lighting up the room but also itself, (b) primordial purity, or the absence of conceptual elaboration, (c) spontaneous perfection, self-liberating all reifying flux within consciousness, (d) unobscured self-reflexion, as in a polished mirror, transparency in variety, like a rainbow or as water taking on the colour of the glass and, as space accepting all objects in it, (e) impartiality.

ε. Although without conceptual object, this subjectivity is "aware". It is the "awareness of awareness", self-settled, wordless, open and reached by a pathless path leading to a pathless land. It is clarity, but without differentiating anything. The fundamental nature of the mind is not part of consciousness. This nature is simply always present to and aware of the state of absolute absoluteness it finds itself constantly in. This is an absolute & blissful selflessness only aware of its absolute object, the lack of substance in all things, itself included. This is an absolute experience of duality, and therefore a nondual dual-union, non-conceptual and so paradoxical. Although not a consciousness, it is a mode of cognition and so definable in terms of the transcendental duality, but then in an absolute sense. But in the case of nondual cognition, a special dual-union pertains. Nondual awareness is not induced by any immediate prior condition. It has no cause. It cannot be determined by a previous moment of consciousness. It is a self-settled, wordless, non-conceptual, open awareness, without a place ("epi") on which a subject might stand ("histâmi") and so pre-epistemological. These ideas are not the result of any reasoning, but poetical elucidations.

ζ. This original nature of mind is absolute. So it will, if not deconstructed, act as a transcendent signifier. Hence the distinction between immanent & transcendent metaphysics. Despite non-conceptuality, direct experience apprehends this open, clear awareness or very subtle mind of Clear Light* present in nondual cognition as a direct encounter with the something not found among sensate or mental objects, i.e. with absolute reality nakedly, purely & primordially united with absolute ideality.

η. The display of phenomena arising out of the empty all-ground or world-ground features (besides primordial matter and primordial information) a cognizing luminosity, presenting (a) an original nature of mind and (b) a primordial enlightenment-being ("Âdi-Buddha") or God* (not to be confused with the self-sufficient ground of classical ontology).
η.1 The non-separation between the absolute all-ground and the original nature of mind is the experiential fruit of directly experiencing this ultimate nature. η.2 While this experience is ineffable, mystics never stop talking about it. When they do, they are not scientists, nor philosophers, but merely poets. When pursuing absolute truth, conventionality is not considered a negative, like something imperfect or useless. Why ? Because there is nothing outside the world-system. The world-system is all there is. Its infinite, absolute ground is not a self-sufficient ground, but a dependent arising empty of itself, but full of an abstract "something" shaping the possibility of all possible concrete actual occasions. Together, this primordial consciousness or mind of Clear Light* (of God* and of all other beginningless mindstreams), pristine information and virtual quantum plasma, make out the set of formative abstracts. They represent the world insofar as it is merely potential, virtual, possible. Although an infinite truth transcending the relative, finite world, it is nevertheless not a different kind of being, not another "class" of actual occasions. Hence, unmistaken absolute truth is revealed in every cognitive act and this simultaneously with its valid but mistaken relative appearance. The absolute exists conventionally. Not in a "higher" realm topologically distinct from the actual world, but precisely at the very, momentary instance when the actual world is observed. The absolute is always-with-the-world. § 5 The Epistemological Status of Nonduality. α. The experience of nonduality is a first person prehension of the nature of mind, recognizing its Clear Light*. This hidden & ineffable observation is "mystical" and cannot be described. This prehension by the absolute subject (the mind of enlightenment) of the absolute object (the lack of inherent existence in all phenomena) is the observation of its suchness/thatness, or momentary presence with nothing more. This is unmistaken, without obscurations, veils or concealments. β. The logic of the tetralemma ("catuskoti") offers the best conceptual approach to nonduality. This tool frees consciousness from all possible reifying conceptualizations, namely by negating all substantial views, introducing all phenomena as without inherent existence, eternal substance, absolute identity or immortal essence ; impermanent but not random. In logic, the particle "not" has no other function than to exclude a given affirmation. The tetralemma therefore excludes everything by exhaustively analyzing what emptiness is not : 1. it is not as it is (identity) : things are always connected with other things and if change by way of determinations & conditions is accepted, then all identity is impermanent and devoid of inherent existence, own-nature or substance ; 2. it is not as it is not (negation) : likewise, the negation of anything cannot be done without negating other things, making what is being negated interconnected and thus impermanent ; 3. it is not as it is and as it is not (mixture) : to say this clause has meaning is to utter a meaningless "flatus vocis", except if differences in time, space & persons are introduced. In the latter case, the mixture is a new identity, and (1) applies ; 4. it is not beyond as it is and as it is not (included middle) : only if (1) & (2) cannot be clearly defined may this clause apply, but it is rejected as invalid. Denying the included middle implies the excluded middle.
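The four clauses admit a compact propositional rendering. What follows is a minimal sketch, not the author's own notation : let S abbreviate the substantialist claim "it is as it is" (inherent existence), with the classical connectives serving only as scaffolding :

    \neg S \;;\qquad \neg(\neg S) \;;\qquad \neg(S \wedge \neg S) \;;\qquad \neg\bigl(\neg(S \vee \neg S)\bigr)

Read within two-valued logic, the first two negations are already jointly unsatisfiable. This is precisely the point made under γ below : the tetralemma is not a derivation inside nominal logic, but a device for exhausting it.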
Using the "reductio at absurdum", the tetralemma negates the four options given by formal logic. Accepting the first two is "nominal", and no valid path to liberation, for suffering is what is common to everything. Identity has to be renounced and its emptiness realized, i.e. conceptualizing the impermanence of everything results in the end of reifying conceptualization. Accepting the last two is "irrational", for in classical logic, non-contradiction & the principle of the excluded middle are necessary (although many-value logics do not accept the principle of the excluded middle). δ. By restriction ("nirodha"), each clause removes, dissolves, evacuates & drives calm the final obstructions of knowledge (cf. "jñeyâ-varana"). The result being a conceptual mind close to or approximating the nondual state. The tetralemma expresses the inapplicability of ordinary, nominal conceptual language to the absolute. The idea behind the tetralemma is to establish a view without concepts, i.e. employ logic to reach beyond logic. This can only be prepared, leading to the purification of the conceptual mind. Indeed, the "wisdom" of meditative equipoise cognizing emptiness is not induced by an inferential consciousness segueing into emptiness. The conceptual "operation" of the tetralemma is not a process by which conceptual thought is spontaneously transformed into the highest possible wisdom. ε. Conceptuality cannot be the cause of non-conceptuality. Ultimate logic proceeds to eliminate reification but does not and does not need to annihilate the concept. Hence, there is no conceptual "operation" establishing the nondual view, no path to the final step, the apex of cognition. ε.1 One needs to completely use up the fuel of the "fire" of reifying conceptual elaboration (this is "nirvâna"). So negating what must be negated, namely inherent existence, is the supreme antidote to cancel the poison of ignorance. ε.2 Only prolonged spiritual exercises (combining calm abiding or tranquility with insight or analysis) are able to properly prepare the mind to experience emptiness directly. This is not like propelling it into "seeing" emptiness, for non-conceptuality arises at the precise moment the highest, purest veil of the conceptual approximation of emptiness is pierced. The fabrication of suchness/thatness by applying the rules of ultimate logic is the ultimate preparation approximating "seeing" full-emptiness, the union of dependent-arising & emptiness. This preparation is however conceptual and so not yet nondual. No doubt advanced, it is not yet direct, seedless, without means, unfabricated. After having made the mind supple, conceptual preparations must be exhausted. A generic concept of emptiness is then realized. But this is not the same as unfabricated suchness/thatness, the direct, unmediated experience of the absolute nature of all possible phenomena. So epistemologically, the transcendent holds no conceptual truth-claim and has no conventional validity, but only ultimate validity (in terms of the act of prehension, itself beyond validation). It is not an object of science nor of immanent metaphysics. Neutral to both, it cannot enforce. There is no coercion in salvation. Nevertheless, by directly observing the ultimate nature of all things, thus entering the wisdom realizing emptiness, an unmistaken, non-conceptual experience is possible. 
In the teachings ("dharma") of the Buddha this experience is nothing less than awakening ("bodhi"), establishing the mind of enlightenment for the sake of all sentient being ("bodhicitta"), the unity of bliss (compassion, method) and emptiness (wisdom). Such an enlightened mind is omnipresent and omniscient (aware of past and present). Although superpowerful, it is not omnipotent. The mind of Clear Light* is valid because reality-as-such is prehended. Because it does not make things appear differently than they are, it is also unmistaken. C. Irrationality versus Poetic Sublimity. If nonduality cannot be conceptually appraised, it must be understood as a highly subjective experience. Relevant no doubt, it has no direct significance whatsoever. So is it irrational ? This would be the case if nonduality would eliminate the conceptual mind. But just as rationality does not eclipse ante-rationality, non-conceptuality does not preclude conceptuality. Awakening does not stop one from thinking in terms of conceptual relationships. Devoid of the reifying tendency so active in the rational mode of cognition, such a mind simultaneously prehends emptiness & fullness, absolute (ultimate) & relative (conventional). Precisely because nonduality is non-conceptual, it cannot argue and so through argument validate the experience of the ultimate. Therefore, as soon as one tries to argue nonduality, irrationality lurks. Apologetics are off. Only direct experience is at hand. This can be prepared, no doubt, but not a single correct preparation causes nonduality ! It can merely be pointed out, introduced or recognized. If not, nothing else can be done. Nondual experience impacts conceptual thinking and therefore proves its significance indirectly, namely in the behaviour of those in which such a profound state is fully realized. Indeed, great compassion or limitless charity is the activity of the mind of Clear Light*. Aware of the vastness of suffering, such a mind engages to alleviate the pervasive suffering present in conventional existence (or a life defined by the determinations & conditions of conventional knowledge). Hence, such a mind has a very powerful intent to end the suffering of all sentient beings and the unmistaken, realized & forceful potential to do so. § 1 Featuring Irrationality. α. Irrationality cognizes without the inclusion of rationality. Its spirit is not dampened by the diabolus in logica, non-contradiction. It either lacks universalia (as in ante-rationality) or does not appreciate the validity of concepts (as in invalid transcendent metaphysics) and so lacks the capacity to identify its mistakes. It has not yet arrived at the cognitive level introducing concepts (as in myth), is unable to establish a stable concept (as in pre-rationality), is bound to context (as in proto-rationality) or cherishes a dogmatic view held true for no good reason, as in blind faith and pre-critical forms of conventionality. β. The many forms of irrationalism all try to undermine reason, introducing absence of sense. In general, nonsense does not accept the power of logic to decide between valid and invalid, between true and false, between mistaken and unmistaken. Making use of logic to defend its dogma, as a form of apology, it mostly tries to seduce others into uncompromising salvic moves. ∫ The deceptions of irrationality may fool some for some time, but never succeed in bamboozling everybody all the time. γ. Like myth, nonduality is non-verbal. 
But while myth is a priori non-reflective and non-reflexive, the ultimate mind is highly reflective and sublimely reflexive. Precisely because of this, the indirect influence of this mind is very powerful. When turned towards others without enforcing anything, triggering spontaneous attunement & metanoia, it identifies ultimate truth in every moment of its awakened mindstream. This is not scientific nor metaphysical, but existential in a poignant, instantaneous way. Spontaneously liberating all ignorance in every moment of the mindstream, suchness is complemented by its own index of truth, possessing the ultimate clarity. Very subtle reification needs to be avoided, for the absolute is empty of itself ! The awakened mindstream prehends the absolute object. This is like the son jumping into the lap of his mother. δ. Irrationality always tries to limit & darken the rational mind. This disruptive activity is ongoing, for the imprints left by the ante-rational mind are powerful emotions & instincts. Come into its own, the mature rational mind cannot eliminate the latter. They provide the vital emotionality with which the desperate search for a self-sufficient ground is clothed. If coarse irrationality leads one to overt insanity, then subtle irrationality is the power of the grip clinging to substance. Very subtle irrationality is making the self-sufficient ground transcendent & eternal, the ultimate spiritual stabilization in self-contradiction. ε. Due to its coarse irrationality, the ante-rational mind becomes confused and stays in permanent, unresolved conflict. The rational mind mediates the contextual problems with abstract concepts and defines the finite world by way of tangential limit-concepts. Here, irrationality feeds on the tendency of the rational mind to reify. Substance-thinking is the subtle form of irrationality. Lastly, when the mind of Clear Light* is reified in terms of an absolute mind-substance (eternal soul) or an absolute object-substance (God), very subtle irrationality is introduced. ∫ Do organized religions hold a monopoly on very subtle irrationality ? Coarse irrationality is often associated with afflictive emotions and violent instincts. These can be identified with ease. Mental disorders like schizophrenia provide case-studies proving how those minds lack the ability to even take care of themselves in the most essential ways. In psychosis, visual, auditive & tactile hallucinations occur. Mental retardation or the uncontrolled activity of ante-rationality also display irrational intentions, volitions, affects, thoughts and states of consciousness. Subtle irrationality, because of its pervasive activity, is more difficult to identify. Here the hallucination is mental, in particular the projection of the imago of the eternal substance. It always involves fixating some object, some subject or both. It can be conscious, as in metaphysical realism or metaphysical idealism, or unconscious, as in the uncritical, untrained conventional mind of "homo normalis". But one cannot introduce an abstract without a logical leap from a finite set to an infinite set, without the "deus ex machina" or "trick" to save the corrupt plot. Very subtle irrationality hallucinates a hallucinating being. However, in critical philosophy, neither a reified concept of emptiness nor a reification of emptiness is possible, for the world is a sea of process. § 2 Transcendence & Art. The sublime is beyond excellence & exemplarity combined.
α. As an intensity of meaningful presence, it captivates every moment of consciousness. Offering clarity, it puts interdependence to the fore. Empty of itself, it is the all-comprehensive prehension of otherness. β. Sublime works of art unfold a unique evolutionary process of spiritualizing states of matter and testify to the continuous process characterizing the natural, nondual, Clear Light* mind. They are our grand ancestral examples. They do not coerce, nor do they unfold in any hesitant way. They are more enduring cultural compounds, bringing the laws of beauty to their highest efficiency & finality. γ. If art, the making of beautiful objects, is a medium for the direct experience of emptiness, then the tale of un-saying can indeed be told, not only with symbols, but also with icons & signals as in a "Gesamtkunstwerk". In terms of the written text, the poetic style excels as a potential carrier for all possible mystical elucidations. Poetry, in addition to, or in lieu of, its apparent meaning, adds aesthetic features to any text. Sensate aesthetic features are denotations based on sensation. Evocative aesthetic features are affective, volitional, cognitive & conscious connotations based on denotations. Excellent poetry combines these features in an exquisite, functional whole. Aesthetic judgement of excellence is not based on the aesthetic features themselves, integrated as they are in an excellent organic whole, but on their total or partial aesthetic meaning. Turning free creativity into symbols, icons & signals, excellence points to qualities beyond the conditions imposed by sensation. A higher-order form is at work. All that matters is the way these differential changes in exquisite aesthetic features are an expression of consciousness. One does not seek beauty (as in pleasure & satisfaction), but shows how beautiful beauty is (as in excellence). The exemplary moves further. ζ. Poetry moving beyond excellence is exemplary. The aesthetic judgement of example is based on a spectrum of possible abstract forms of harmony, ranging from the entirely subjective to the entirely objective. These abstract forms, rooted in transcendental aesthetics, are necessary and formal (cf. Criticosynthesis, 2008, chapter 5). The transcendental object is a sensate object, a text, the subject an expressive poet. All harmonisations necessarily involve this pair. Positing, comparing, denying, uniting & transcending are the five models of harmony :
• Positing : affirming the object without the subject or affirming the subject without the object ;
• Comparing : considering the object more than the subject or considering the subject more than the object ;
• Denying : rejecting the object or rejecting the subject ;
• Uniting : identifying object with subject and subject with object ;
• Transcending : zeroing out of all harmonization, without object or subject.
The sublime moves further. η. Beyond excellence and exemplarity, poetry is sublime. When an artist displays his or her natural mind of Clear Light*, sublime realizations result. In these, everything is permeated with the open potentiality present in the mind of the sublime artist. Poetically thinking this Clear Light* is the object of a transcendent metaphysics, backed by an arguable philosophy of totality and inspired poetry. Clearly nothing truly valid or arguable can be said about the sublime. Because all sentient beings possess the potential for awakening, they all can respond to sublimity.
θ. Given the sublime harmony of the mind of Clear Light* cannot be conceptualized, it stands to reason only poetry and great compassion are left. The former is suggestive of its profoundness, while the latter brings about its most cherished intent : to awaken all possible sentient beings. At their best, the holy scriptures of the organized religions, and the "sûtras" of those trying to say something about what cannot be put into words, are examples of such sublime poetry. If not, like all forms of kataphatic transcendent metaphysics, they are merely dangerous deceptions. And the same goes for the present speculations ... The value of a poem is for the actual reader to decide. Given nondual cognition is non-conceptual, nothing can be said about the phenomenology of prehension, the cognitive capacity to think in a nondual way, fully entering the wisdom realizing the empty truth of all possible phenomena. Only direct experience remains possible. Breaking silence is merely for apologetic reasons ; as the history of the religions shows. Then the highest level of cognition is monopolized by a kataphatic soteriology. To assign the "highest name" to the "highest Being" was a way to conjure it, to allow rationalizations of what cannot be rationalized. Beyond being, non-being, being & non-being and neither being nor non-being, this level of cognition does not allow for any labelling or name-giving. Working at the level of direct perception, this prehension is beyond conceptual description. Though it can be felt and though it can direct action, no valid, i.e. arguable, statement can be made concerning it. Transcendent metaphysics is not rational but meta-rational. This means it must be poetical, for only poetry allows the sublime to be prehended in a written text. Like music, it has the capacity to evoke a "mandala" or "Gestalt" and its interdependences. Like mathematics, it is a fluid and sensitive structure born out of mental balance. But poetry has no truth-claim, no conceptual stability and no a priori logic. Swimming the free style, sublime poets merely point out, but do not instruct. This medium is excellent for all possible spiritual elaborations using conceptual reason (and so text). Never dogmatic but ever discreet, sublime poetry is only revelatory in the sporadic spur. It builds no Babel. 1.4 Ontology. "Philosophers can never hope finally to formulate these metaphysical first principles. Weakness of insight and deficiencies of language stand in the way inexorably. Words and phrases must be stretched towards a generality foreign to their ordinary usage ; and however such elements of language be stabilized as technicalities, they remain metaphors mutely appealing for an imaginative leap." - Whitehead, A.N., PR, § 6. Let us take heed of this warning. The speculative study of those features shared by all possible actual occasions is not a science. It does not advance any sensate object, but, when valid, merely brings greater order in and larger scope to our mentality or set of mental objects. This is provisional and dependent on the advancement of science. Process philosophy devised a very specialized technical language to explain the phenomenology of actual occasions, making it for example suitable for metaphysical inquiries into quantum mechanics. This is possible because despite technicalities, metaphysics in general and ontology in particular call for an imaginative leap. Grand stories are told because they inspire, not because they are eternally true.
In his Physics, Aristotle deals with material objects or entities. Metaphysics, "what comes after Physics", takes as its object the immaterial, non-physical entities (beyond or behind the physical world), with theology at its core. Moreover, Metaphysics also studies being in general or being as such, i.e. the study of what is shared in common by all possible entities. This "first philosophy", dealing with the most basic principles based on what all possible things share, is a study of being qua being, leading to the most general concepts or categories of being. What being makes beings be ? Christian philosophy (sic) forged an alliance between theology and this first philosophy. The God of scripture was deemed that Being. He sent His holy word for humans to follow and all the rest of it. In the XVIIth century, first philosophy divorced theology and became general metaphysics. In 1613, the term "ontology" was coined as another name for "metaphysica generalis". And so this became the task of ontology : what do all possible beings have in common ? Process ontology asks : what do all possible mental & sensate processes have in common ? And when this is established : What is there ? and What is truly there ? These questions inevitably lead one to ask : What is the absolute ? Theo-ontology is thus merely an instance of ontological inquiry. A. Defining Ontology without the Nature of Being. Before Kant, "General Metaphysics" or ontology was substantialist, essentialist and so seeking a self-sufficient ground, i.e. the self-sustaining & final substantial level of all that is. The presence of such an independent, autarchic "hypokeimenon" was not in doubt. To seek a "ground" goes with the territory, for ontology determines the common features of every possible thing. But to define these general concepts covering all possible phenomena as (a) existing from their own side and (b) forever remaining the same or permanent, is the cul-de-sac of pre-critical metaphysics. We need a sufficient ground, but not a self-sufficient one. In an absolutist view, valid science & valid metaphysics are eternal. So the absolute nature of all possible phenomena must be eternal too. Hence, the ground of this understanding concerning the general features of all that exists must be something permanent, substantial, essential. Criticism unmasks this eternalization assumed by substantialist foundationalism as an illusion. The common ground argued by ontology is a speculative understanding of what all phenomena share. This metaphysical knowledge, even valid, is not lasting, but, as all conventional knowledge, valid or invalid, provisional, relative and likely to change. Pre-critical metaphysics, unwilling to embrace radical nominalism, was unable to conceptualize a non-substantial ground. The origin or "arché" was eternal, unchanging, own-powered, with a nature existing on its own, with inhering properties. This own-nature is either an objective "substance of substances" or a subjective "self", both possessing their own-form or isolated, essential, unique & unchanging character. Single, dual or triadic, the first principles of the ontology of old were substances. Thinking a non-substantial ground is affirming it is not self-powered but other-powered and "present" since "beginningless time". Given physical space & time came into existence with the Big Bang, the ground of the totality of the world, also called the ultimate ground of phenomena or world-ground, is virtual or potential, i.e.
nothing with the potential to become something. It is not a primordial or ultimate cause of the world, but its mere possibility. This virtual world-ground is the infinite set of propensities making the finite actual next moment of the world possible or likely. The world-ground is not another ontological order, "hidden variable" or a different substantial & deterministic world behind, beyond, before or within the world, for there is only one ontological order, namely the world of actual occasions. It is more like an abstract, virtual world preparing concrete actuality. If there is only one world, as naturalism extols, then the ground of Nature cannot be an ontological explainer, an ultimate self-sufficient cause abiding in a "Hinterwelt", for there is no Platonic "chorismos" or rift between two ontological worlds. Hence, there is no God creating the world "ex nihilo". Paying compliments to God* is one thing, but taking them seriously is quite another ! Before the world physically existed, the primordial quantum plasma pre-existed as one of the three non-temporal, infinite formative elements characterizing the infinite world-ground (together with primordial architecture and primordial sentience). In process cosmology, these designate the limitless possibility, potential, likelihood or propensity of creative disturbance, deflection or "clinamen" of selected probabilities, making another Big Bang (after a Big Crunch or Big Evaporation) very likely. Both world and world-ground make out the world-system. The world is the sea of concrete actual occasions rising from infinite possibilities, featuring primordial matter, abstract forms of creativity (unity) & absolute sentience (the dual-union of the nondual mind of Clear Light* of the absolute mind of all possible enlightenment of all possible mindstreams). The world-ground is called "ground" because of these formative elements, covering potentialities pre-existing outside space and time. This non-temporal & non-spatial order of propensities is grasped in terms of the fundamentals of the possibility or probability of process, but then in absolute terms : absolute sentience, the creative laws of the world and the primordial quantum field. In this way, these three formative elements of the world tie in with the three ontological aspects of every actual occasion at work in the mundus ; matter, information & consciousness. The world of actual occasions hic et nunc is like the "music of the spheres", the actual ongoing cosmic symphony of togetherness of countless interdependent actual occasions. The world-ground is then like the "voice of the silence", the material, creative & sentient probabilities or potentialities making possible the next moment after this moment in the infinite histories of the worlds. § 1 Place of Ontology in Metaphysics. α. Critical process ontology asks this : What do all possible mental & sensate processes have in common ? The answer to this question, aiming at what all objects share, directly influences the outcome of any metaphysical inquiry. It determines the fundamental concepts of the worldview in question. Any error at this level harms the precision of the arguments targeting specific objects. But given a well argued ontology, the general argument dealing with the totality of the world cannot fail (for if not derived from it, it is at least dependent on it). β. No theoretical philosophy features strong, coherent unity without a valid ontology. A general perspective cannot be derived from a finite set of specifics.
It has to be induced. This is an intuitive, creative moment. Eliminating ontology from philosophy is like painting without paint. The soundness of ontology reflects on the coherency of the worldview. Logic and argument are all that remain. In both cases, the choice of logic is paramount. This brings in the question of style (cf. supra). γ. Ontology makes a fundamental choice. It designates the ultimate object, namely the one object or ontological principle shared by all possible phenomena. Reifying the object/subject relationship necessary in all possible cognition, classical ontology invented substantial objects and/or substantial subjects, acting as self-sufficient anchors to stabilize their foundationalist systems of being. This resulted in (a) the substantial, ideal (super)subject of subjectivism and its spiritualism, (b) the substantial, real (super)object of objectivism and its materialism (or physicalism) or (c) the substantial duality of rationalism, with matter interacting with the non-physical mind. δ. The fundamental choice is intuitive. Singling out a common feature calls for a creative act explained in the course of its well-formed elaboration, defining a hermeneutical circle. This cannot be avoided. The Eureka !-moment cannot be caused. Neither is it void of determinations and conditions. In the past, the extremes of spiritualism & materialism excelled in the drama of knowledge. Derived from reductionism & foundationalism, these metaphysical extremes have had their best time. Instead of identifying their first ontological principle with either the object (materialism) or the subject (spiritualism) of the concordia discors, ontologies of the extremes are avoided by asking what both object and subject have in common. Hence, materialism & spiritualism are unmasked as incomplete answers derived from an unsuccessful reduction (of mind to matter or of matter to mind). ε. If something exists, it does not merely exist because it appears to an observer to exist. Absolute idealism is rejected. If something does not exist as a substance with inhering properties, it may exist as some thing in process. Relative (conventional) realism is retained. The Middle Way fares well between the extremes of absolute affirmation and absolute negation. Because all objects are deemed to share a finite set of first principles, ontology is "first philosophy". These first principles orient all further possible speculation. Process ontology seeks a series of concepts dealing with the fundamental properties of all possible phenomena. The latter are deemed processes, not natures (or substances). Given the two sides of the transcendental spectrum of conceptual rationality, classical ontology reduced either subject to object (eliminating mind, as in absolute objectivism) or object to subject (eliminating matter, as in absolute subjectivism). Subject-ontologies fail because they cannot explain the tenacity of some sensate objects. Object-ontologies fail because they cannot operate without a subject possessing its object. Process ontology wants to establish the common ground between subjectivity (mind) and objectivity (information, matter). It finds this in the concept of "actual occasion" or isthmus of actuality. This is a moment x of what exists hic et nunc, with differential extension x.dt. § 2 Objects of Ontology : What is There ? α. When the view, in casu process-based phenomena, has been established, ask : What is there ?
The exactitude of objects, their quality of having high accuracy & consistency, refers to their ontological status, namely to what kind of object is at hand. Four categories of objects are distinguished : (1) absolutely nonexistent objects, (2) fictional objects, (3) relatively existent objects and (4) absolutely existent objects. β. Absolutely Nonexistent Objects : That Which Is Not. When an object does not exist, nothing can be identified corresponding to it and so nothing ostensibly refers to it. Absolutely nonexistent objects are always analytically nonexistent objects involving a contradictio in terminis. They are a fortiori nonexistent in an absolute sense. A square circle, a triangle with four angles, a curved flat space etc. cannot correspond to anything, although by themselves the words "square", "circle", "triangle", "angle", "four", "curved", "flat" and "space" do make sense. But when combined, a mental clash occurs eliminating any possibility of even imagining something associated with the combination. The void is not the empty set of potentialities, of nothing (infinite emptiness) becoming something (finite fullness). γ. Fictional Objects : That Which Deceives. Fictional objects like Hamlet are deemed not to exist, although in Shakespeare's play called "Hamlet", the Prince of Denmark is a leading character. Nobody versed in English literature agrees with the statement nothing is aimed at when the name "Hamlet" is mentioned, but when asked where Hamlet precisely lives, no answer can be provided ! He is not in Denmark, nor does he "exist" in the text of the play named after him. But when the play is actually performed, no member of the skilled audience will have any difficulty identifying Hamlet. γ.1 In the case of the unicorn, we assemble two existing objects (namely a white horse and a large waved horn) and this combination exists in our imagination. Sometimes these objects are merely a private fantasy, sometimes they can -through trickery- be made intersubjectively available. Indeed, before recent times, the horns of a rabbit, the hairs on a fish, the wings of a turtle, a unicorn or a pink flying elephant, etc. could not be pointed at as moving and/or three-dimensional objects. By rapidly projecting digitally manufactured pictures on a white screen, any fiction conjured by our imagination may be generated on it. Even depth can be holographically manufactured. In that way, what used to be merely private imagination can be made intersubjectively available "on screen" repeatedly. While nothing more than tricks with artificial light, these objects may move us, influence us and prompt us into action. γ.2 Fictional objects are either private or public. Dreams and personal fantasies, ranging from the fruits of a fertile imagination to psychotic hallucinations, are not available to others. They can only be identified by the subject to which they appear. Nobody else is available to grasp at them. They nevertheless exist as fictional objects. Intersubjective imaginal objects, like fictional characters, cinematographic objects, artistic objects, collective projections or objects appearing as the result of collective hypnosis, also exist because one can indeed aim at them, but this identification is intersubjective, very limited in time, unstable and, most importantly, based on a trick, i.e. an intended deception. γ.3 Fictional objects exist because a conscious agent intends to fool. To do so, elaborate trappings are introduced.
These may be physical (mechanical devices or electronic systems), or psychological (as in suggestion, hypnosis and placebo). Without this intent to trick, i.e. to misrepresent reality, positing something which cannot possibly be there, fiction would not exist. Summarizing : fictional objects are relatively nonexistent objects. δ. Relatively Existing Objects : That Which Conceals. Relatively existing objects are those apprehended by the normal waking consciousness of most, if not all, human beings. These are sensate objects and non-fictional mental objects. Their "normality" is defined statistically (a majority apprehends them as they appear), normatively (given all necessary conditions, they must be apprehended as they do) and existentially (their apprehension is co-relative with a particular observer). They are mostly intersubjective, relatively stable, nominal, conventional and independent of conditions put in place with the explicit intent to deceive. They can also be intimate & private, or reflective of automatic & unconscious activity. Except for non-fictional mental objects (like accurate memories, the activity of imagination, volitions, affects, thoughts and states of consciousness), they are always shared with other conscious agents. Although they change as a function of spatio-temporal conditions, these alterations may be slow, small and nearly imperceptible, as in the extreme case of a mountain, the life of a star or the existence of the universe. They may be quick, large and obvious, their existence deemed ephemeral, fleeting or transient, as is the case for climatic conditions or the position & momentum of observed atoms. These objects define what we understand by "normal" reality, one shared and delimited by others, and hence conventional. These objects are not fabricated or manufactured by any human intent to deceive others. They are what is nominally "given". δ.1 Among these conventional objects, some misrepresent physical reality without the artificial intention to deceive. They may be optical illusions one can eliminate, as when a stick immersed in water -merely appearing as very large- is removed from the water. Maybe they cannot be turned around, as the apparent daily movement of the Sun, actually the rotation of the Earth on its axis, or a Hunter's Moon. Maybe these objects are no longer validated by science, like phlogiston, or caloric, the fluid once thought to flow from hotter to colder bodies. Among conventional objects, some temporarily represent existence in a valid way. These are the objects of science. The validation of these objects is defined by the principles of logic, the norm of theoretical epistemology and the maxims of the process producing valid knowledge about relatively existing objects. δ.2 The objects of science constitute the valid paradigmatic knowledge of the historical era in which these conventional objects appear. They represent the common ground between experimentation and argumentation : regulated, on the one hand, by an idea of truth focusing on the supposed correspondence between theory and conventional objects, and, on the other, by a theory of truth regulated by the idea of consensus between all involved sign-interpreters. A sign-interpreter is a conscious, cognizing consciousness operating signals, icons and symbols in a well-ordered way, according to principles, norms & maxims producing meaning by way of meaningful glyphs, or states of matter infused with information.
δ.3 Relatively existing objects or conventional objects appear as inherently existing outside the subject apprehending them, inviting the division between "inner" and "outer". In this valid but mistaken view, they seem independent, self-powered, and existing from their own side, by their own "inner" nature, essence ("eidos"), substance ("ousia"), or own-form ("svabhâva"). But as ultimate logic proves (cf. infra), this is merely an appearance concealing their suchness/thatness or what they truly are. These conventional objects do not appear as they truly are, and so conceal their ultimate, implicit process-nature lacking inherent own-form. This is the case for all fictional and conventional objects. Even when the stick is removed from the water, and thus smaller than it was when immersed, its conventionality still conceals its suchness/thatness. While (a) a deception, (b) the subject of an optical illusion (immersed) and (c) a valid scientific object, the solid stick continues seemingly not to depend on conditions outside itself to appear as it does, independent & localized. It still manifests as an object "out there", cut off from its observer. But when prehended in the nondual mode of cognition, each object is simultaneously cognized as empty of substance and fully interdependent. This means the absolute nature of each object is nothing more than one of its properties. ∫ Again : the ultimate exists conventionally. ε. Absolutely Existing Objects : That Which Is What It Is. These objects are apprehended by the wisdom mind of Clear Light* no longer bewitched by the illusion posed by any objects. Such a mind directly sees the suchness/thatness or full-emptiness of all phenomena, i.e. simultaneously apprehends how all phenomena (a) are empty of themselves and (b) full of otherness. Classifying what exists brings about two broad sides ; the conventional and the ultimate. Conventional truth is conceptual and rational, based on experimentation and argumentation, on valid science and valid metaphysics. Ultimate truth is non-conceptual and intuitional, based on direct nondual prehension and on sublime poetry, not on argumentation. Transcendent metaphysics does not argue, but merely points at the Moon. In philosophy, both truths are in fact epistemic isolates (of the conventional and the ultimate aspect of every object, of its full and empty properties). In mysticism, they are the datum of a direct and unitary experience, prehending them simultaneously and this ongoingly (swimmingly). § 3 Monist, Dualist & Pluralist Ontologies. α. The fundamental ontological choice is either monist, dualist or pluralist. Only one, only two or more than two fundamental ontological principles prevail. Mindful of Ockham, the monad is preferable. By adhering to parsimony, the number of ontologically different entities is limited. β. The monist posits a single fundamental ontological principle. This is the most clear-cut and economical choice. If such a principle can be found and argued, a well-formed ontology ensues. With a single principle, all possible entities share the same fundamental ground and so can fully participate in each other ; their differences are nothing more than a measure of their distinctness. No ontological differences exist. γ. With more than a singularity, difference and distinctness are no longer the same. Ontological differences divide the world up into as many fundamental principles as designated. Dualists, like Plato & Descartes, settle for two fundamental ontological principles.
Leaving the monad, their ontology mirrors the epistemological dyad of knower & known characterizing knowledge. From this point on, a dangerous confusion creeps in : How can two ontologically different principles explain the unity of the world ? If two things radically differ (grounded by separate principles), how can they exist together or form any relationships ? How can they ever interact ? γ.1 In neurophilosophy, this question is rephrased. How can the non-physical mind interact with the brain without breaching the energy-conservation law of thermodynamics ? A dualist ontology mirrors the ongoing tensions of conceptuality and does not succeed in explaining the unity of the manifold. In Platonism, this problem is more or less solved by identifying the world of becoming as an illusion, a pale reflection of the true world of ideas. γ.2 In Cartesianism, the problems related to this duality eventually result in reductionism, privileging the physical (as in the realism of materialism & objectivism) or the non-physical (as in the idealism of spiritualism & subjectivism). δ. The pluralist tries to solve the basic ontological problem of dualism by introducing a "tertium comparationis". A closure is at hand, one leading to a triune concept, reflecting, to invoke synthesis, the third factor back to the first, the triad to the monad. Without this return to unity, only the triad is given and by addition of unity the "Ten Thousand Things" follow. Moreover, adding one or more fundamental ontological principles does not eliminate the basic ontological problem facing duality. On the contrary, to explain the difference between two elements with a third invokes another difficulty : how can two different factors be bridged by another different factor ? This seems like multiplying problems. ε. The proposed process ontology is a monism. Only a single ontological building block is assumed and called "actual occasion", the momentary actuality characterized by extensiveness. This moment, instance or droplet of Nature has properties. These can be understood when the temporal extension of any duration is progressively diminished without ever arriving at a smallest duration as its limit. Such understanding is an abstractive set converging to the concept of Nature at an instant.
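The convergence invoked under ε can be given a formal face. What follows is a minimal sketch in the spirit of Whitehead's method of extensive abstraction ; the notation is an illustration, not the author's own : consider a nested series of durations, each extending over the next,

    d_1 \supset d_2 \supset d_3 \supset \cdots \qquad \text{with no smallest member}

Such an abstractive set contains no final, minimal duration ; "Nature at an instant" is the ideal limit-concept towards which the series converges without ever including it as a member.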
ζ. Something is always going on everywhere, even in the so-called empty space of Torricelli. Nature abhors the void. Both the electromagnetic field & the lowest energy state (or uniform zero-point field) evidence the absence of an absolute vacuum in physics. η. "Actual occasion" is the building-block of process ontology, the differential phenomenal moment (as particle) starting the stream of moments (as wave). All entities share actual occasions. In ontology, the monist has the advantage. The totality of all phenomena in actuality is understood in terms of a single ontological constituent, thereby simplifying the basic ontological scheme. The issue here is not to explain difference, but to assure the complexity of the manifold can be prehended from the vantage point given by a single constituent. To ensure actual occasions are conceptualized to accommodate rather than to hinder their creative togetherness with other actual occasions, process ontology seeks a phenomenology of the actual occasion. The void does not exist. In empty space, energy is present. Substance cannot be found. The fullness of the mundus is given as the interdependence between all actual occasions entering each other's histories. The emptiness of the world is the absolute absence of self-powered, inherently existing objects with their likewise eternalized properties. This emptiness is not an entity, but merely a property of every actually existing thing. § 4 Failures of Materialist & Spiritualist Ontologies. α. Reductionist monism cuts existence in half. Add essentialism and one half is imputed as the self-sufficient ground, the other half is denied or deemed illusory ("mâyâ"). All possible subjects of knowledge (knowers) possess objects belonging to two and only two mental categories, namely "sensate" or "mental". α.1 Materialist (realist) monism considers sensate objects to be fundamental and mental objects merely derived or emergent (with no downward causality). In its essentialist version, matter exists from its own side, independent & separate from the subjects apprehending it. α.2 Spiritualist (idealist) monism considers mental objects to be fundamental and sensate objects constituted by the former. In its essentialist version, the "Geist" exists from its own side, independent & separate from the objects it constitutes (as a Creator-God of sorts). Both reductionist strategies fail to explain the totality of the world-system, both as actuality & as possibility. Materialism cannot explain the transcendental unity of apprehension with the manifold, and spiritualism cannot explain the manifold by way of the intra-mental alone. β. Materialism fails to apprehend the intra-mental subject of experience correctly. The impact of conscious choice on material process is either non-existent or of no importance. If it does accept the reality of the non-physical in its own right, it cannot deliver a material (efficient) process to explain these non-physical (final) determinations. Moreover, the unity of the manifold cannot be explained by matter alone. If materialism is "true", then neither logic nor argumentation are possible ! Hence, a priori, materialism cannot provide its own apology. Bound to become dogmatic and in alliance with media power and money, materialism is as grotesque as the ecclesiastic powers of old. γ. Spiritualism fails to apprehend the extra-mental object of experience correctly. The efficient determinations of material process on the non-physical are evident. To be cognizing, the mind has to possess an object. This is not an intra-mental but an extra-mental entity. To explain the working of free will without the laws of matter, or worse, to allow matter to be constituted by mind, cripples our understanding of the reality of the physical. Moreover, the variety, differentiation & multiplicity of Nature cannot be explained by the unity of the mind alone. Tending towards unity, mind cannot be made responsible for all possible physicality without damaging the rational understanding of the world. In its dogmatic form, spiritualism verges on the irrational. What can be worse than fools & folly running the world ? δ. Process ontology does not seek its fundamental principle in either the mental or the extra-mental. The object/subject dualism is left intact and a deeper common denominator is found : the actual occasion. All phenomena, objects, events, entities etc., in short : all in existence is basically an actual occasion. Objects are moments with certain extensive properties and creative advance. Materialism and spiritualism fail to face the whole world. These are ad hoc monisms. They stop their analysis by reduction, not by integration.
The latter means as many phenomena as possible are made part of ontology. Exclusivism becomes inclusivism. There is always something going on then and there. This is the one unit factor in Nature. Whether objects are mental or sensate, both can be reduced to actual occasions of which they are merely aggregates. § 5 Voidness, Emptiness & Interdependence. α. The absolutely nonexistent is the category of the collection of nothing at all. The empty set thought of as absolutely nothing with no potential whatsoever to become anything is called "the void". β. The void is an empty set with no possible members. Emptiness is the set of nothing becoming something. The void does not exist. Emptiness exists as pure potentiality, possibility or probability (the likelihood of something). In what follows, the concept "empty set" only refers to emptiness. If the empty set with no possible members is meant, the term "void" will be used. γ. All numbers can be bootstrapped out of the empty set by the operations of the mind. Suppose the mind observes the empty set. The mind's mere act of observation causes the set containing the empty set to appear. This set is not empty, because it contains the empty set. By producing the set containing the empty set, the mind has generated the first number, or "1". Perceiving the empty set and the set containing the empty set, the mind apprehends two sets and has generated the second number, or "2" out of emptiness, etc. upward to infinity. δ. The entire natural number system can be generated by the play of the mind on emptiness and this in the absence of the need to refer to anything material or countable. Numbers are non-physical phenomena making no reference to physical systems for their existence. Numbers do not exist from their own side (as Platonic ideas), but exist as dependently-related manifestations of the working of the mind.
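The bootstrap described under γ & δ is an informal reading of a standard set-theoretic construction. A minimal sketch in the von Neumann style, where each number is defined as the set of its predecessors :

    \begin{aligned} 0 &:= \emptyset \\ 1 &:= \{\emptyset\} = \{0\} \\ 2 &:= \{\emptyset,\{\emptyset\}\} = \{0,1\} \\ n+1 &:= n \cup \{n\} \end{aligned}

Nothing material is counted at any step ; the whole series arises by iterating a single mental operation (collecting) on emptiness.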
ε. Nothing comes out of nothing (the void) ; "ex nihilo nihil fit" ! Cosmology & physics cannot touch the question of the before of the Big Bang. As time & space commence with this singular explosion, to ask what was before is deemed nonsensical. But logically, any term is subject to a certain order or sequence. Ontologically therefore, the issue can thus be approached in terms of a logical progression and as such make perfect sense. ε.1 If before the Big Bang nothing is identified (or identifiable), then the void logically precedes the becoming of the physical universe. But if this is the case, then the Big Bang could not have happened. The fact of this singular beginning of the physical universe and the void as absolute nonexistence are thus incompatible. If there was absolutely nothing before the Big Bang, not even the possibility of something, then the Big Bang would be nonexistent too. But this, science tells us, is not the case. ε.2 To consider the Big Bang ontologically, emptiness must pre-exist. Not as any thing, i.e. as any concrete, worldly actual occasion, but merely as the potential or virtuality of such actuality. The potential of the Big Bang lies hidden in the world-ground, the mere possibility of the next moment of the world. What primordial determinations & conditions made the Big Bang possible ? These formative abstracts are primordial operators conditioned (not by their own-natures like in a co-substantial Divine Trinity) but solely by their primordial interrelatedness or virtual togetherness. ζ. The absence of substantial existence is the absolute property of all possible objects. This means the object is empty of an inherent nature or own-form, but this in full participation and togetherness with other objects. In this immanent approach, emptiness is merely a non-affirmative negation of substantiality. But for those having a direct experience of this transcendent signifier, emptiness is the potential to connect every thing with every other thing. And when the emptiness of the mind itself is seen, it is observed as the Clear Light* inseparable from the world-ground, the virtual pre-existence of the next moment of the world. Emptiness is not something, but nothing becoming something. When a concrete, worldly actual occasion emerges, there is no longer (virtual, formative) emptiness but full, actual interdependence. This nothingness of emptiness cannot be absolute nothingness (the nihilism of the void), but merely absence of own-form with the potential for infinite interactions shaping a unique plenum. Note this : the potential of emptiness, of form emerging out of the formless, cannot be apprehended but only prehended. Its experience falls therefore outside science and immanent metaphysics. "Seeing" emptiness is directly observing how absence of own-nature fosters creative advance through increased togetherness of actual occasions. Only non-conceptual, nondual prehension possesses such an absolute object. As identifying absence of own-form is conceptual, ultimate logic is no doubt "philosophical". Given "seeing" emptiness involves non-conceptual cognition, it may be called "yogic" or "intuitive". The former is given to all intelligent beings. The latter to those enjoying the hard work of their emancipation. B. Perennial Ontology ? Perennial philosophy cherishes the idea that within all spiritual traditions & religions, a mystical stream is present, acting as the repository of the wisdom of humanity after it made contact with a supernatural, basically non-physical higher-order reality. Although in general terms this is correct, a divide can be identified. The phrase "perennial philosophy" was coined by Agostino Steuco, a Catholic Bishop and Old Testament scholar, who, in 1540, dedicated his De Perenni Philosophia Libri X to an effort showing how many ideas of the sages & philosophers of Antiquity were in fact in harmony with the "magister fidei" of Catholicism in general and with the teachings of the Roman Church in particular. Later, Leibniz would also reintroduce the phrase. It cannot be denied speculative activity has architecture & momentum. So certain recurrent regularities and logical organizations (software or information) can indeed be identified. Western philosophy is rooted in Antiquity, and -in the case of Europe- was directly influenced by the sapiential wisdom-teachings of the Ancient Egyptians (cf. The Maxims of Good Discourse or the Wisdom of Ptahhotep, 2002). Add to this the "Greek miracle" and the "wisdom" coming from the Middle East and the Far East via the trade routes, then a common Western vision may be discerned. The ante-rational, multi-millenarian storehouse of experience of the Ancient Egyptians (cf. Hermes the Egyptian, 2002), and their "magic" of sacred words (cf. the hieroglyphs and their power : To Become A Magician, 2001), inspired the "minors" of the syllogistic inferences loved by the Greeks, an activity spawning their concept-realism. This Greek synthesis formed a common thread in Western spiritual thought, Hellenizing Hermetism, Judaism, Christianity & Islam.
Until recently, it remained unchecked at work in materialism, instrumentalism & scientism. Western intellectuals maintain a common ontological interest. Likewise, Eastern philosophy (in India, Tibet, China, Japan, etc.) outlines a common metaphysical & ontological view. Perennial ontology, as a common view on things, can only operate if the common denominator covers what is shared by humanity East & West. No doubt this is a considerable amount of information, rooted in the perennial pre-Neolithic shamanistic environment (involving the return to the "first time" of myth by way of mythical thought). Nevertheless, perennial ontology must also consider the "Dharma difference" between both visions. Grosso modo, the West tries "to save" a self-sufficient common ground. This is a substance possessing its properties from its own side, inherently, separately and independently from other things. The West emphasizes the objective features of this self-sufficient ground. This substantial own-nature is an essence ("eidos", "ousia", "substantia"), exists inherently, by itself ("causa sui") and on its own (absolute aloneness). A kataphatic theology (cf. infra) is possible. By and large, the East, foremost trying to clarify the subjective features of experience, turns inward. The experience of a "fourth state" ("turîya") of consciousness besides waking, dreaming & the dreamless sleep, dramatically shaped the speculative endeavours of Jainism, Buddhism & Vedânta. As a consequence, the impermanence of determinations & conditions leading up to subjective experiences was strongly felt and thematized. This gave rise to the important difference between "Dharmic" and "non-Dharmic" views. In the former, held by Taoism & Buddhism, all "dharmas" or existing things only possess interrelationality or togetherness, but no enduring substantial essence whatsoever. The presence of the Dharma difference divides perennial ontology into two sets of views ; on the one hand, the substantivist, own-nature view, on the other hand, the dharmic, process view. This distinction returns in contemporary philosophy as the divide between, on the one hand, materialism (physicalism, instrumentalism, scientism) and, on the other hand, the philosophy of relativity, quantum mechanics, chaos theory and process thinking. Process considers only architecture (software or information), momentum (hardware or matter) and sense (userware or consciousness). Besides the continuous ongoing togetherness of these three operators and the creative advance or novel togetherness of all aggregates of actual occasions, there is nothing. Not a single substance can be identified. Under ultimate analysis, all reifications perish. α. In the Old Kingdom (ca. 2670 - 2198 BCE), the virtual clause "n SDmt.f", i.e. "before he has (had) ..." or "he has (had) not yet ...", was used to denote a prior, potential nonexistent state, namely one before the actuality of that state had happened. To be nonexistent precludes actual existence hic et nunc, but does not preclude the possibility of becoming existent (expressed by the verb "kpr", "kheper", "to become", which also means "to transform"). β. There is some thing before every thing, pre-existing before the order, the architecture and the life of creation. This is called "Nun" (cf. Liber Nun, 2005). The world manifested as a transformation or change from this nonexistent, virtual state to an existing actuality. The virtual state is therefore not actual, but informs possibility, latency and potentiality.
As a potency anterior to creation, the Egyptian theologians of Memphis, Heliopolis, Hermopolis, Abydos and Thebes conceived this pre-existent state as something very special, a primordial state existing before "form", i.e. anterior to space and time, and so before the creation of sky, Earth, horizon and their "natural" dynamics. γ. The virtual, pre-existing state is not the origin of order. It cannot serve as a self-sufficient ground ! The emergence of the world, of light and life is envisaged as spontaneous (autogenesis) and without any possible determination ("causa sui"). γ.1 Precreation is the conjunction of this undifferentiated state and the sheer possibility of something pre-existing as a virtual, autogenous singularity called "Atum". γ.2 Precreation is this mythical dual-union of dark Nun and clear Atum, of an infinite, undifferentiated energy-field and a primordial atom, monad or self-powered and self-sufficient absolute singularity. Atum is the "soul" (or "Ba") of Nun ! The efficient power of pre-existence. Creation emerges from a monad, floating "very weary" in the dark, gloomy, lifeless infinity of Nun. Within the omnipresent oceanlike substance of Nun, the possibility of order, light and life subsists as a pre-existing singular object capable of self-creation "ex nihilo". Hence, although Nun is nowhere and everywhere, never and always, it is the primordial, irreversible and everlasting milieu in which the eternal potential of creation creates itself. δ.1 With this distinction, the Ancient Egyptians had divided what creates and is not created (Nun) from what creates and is (self)created (Atum). The next distinction, namely between what is (self)created (Atum and his Ennead) and what is created but does not create (the world), is also made. δ.2 The whole order of the world needs to "return" (by means of the magic of the "Great House" or Pharaoh, the divine king) to the primordial moment when Atum creates Atum and -within Nun- the world with its order (Maat) came forth. ε. The Greek philosophical mentality was unique, but it did not come forth "ex nihilo". It was the result of the network of forces triggering the so-called "Greek Renaissance", based on traditional Minoan & Mycenæan elements, but made explicit by a series of "new" concepts derived from Mesopotamia, Iran and, last but not least, Ancient Egypt. ε.1 According to Anaximander of Miletus (ca. 611 - 547 BCE), the cosmos developed out of the "apeiron", the boundless, infinite and indefinite (without distinguishable qualities). Aristotle would add : immortal, Divine and imperishable. ζ. The self-sufficient ground sought by the Pre-Socratics is "arché", "phusis", "kosmos", "aletheia" (truth) & "dike" (justice). For Homer and Hesiod, the sky or "Ouranos" is a brazen roof or a seat set firm. The Greeks, with a few exceptions like Heraclitus (ca. 540 - 475 BCE), could not grasp the continuity of the architecture at work in every momentum, of the style or kinetographics of movement. ζ.1 For substantivists, "solid" and "eternal" by definition imply lack of movement, absence of change or some kind of fixation in a self-sufficient, Olympian ground, an underlying reality ("hypokeimenon"). ζ.2 Seeking this out, irrespective of Platonic or Peripatetic inclinations, is the root of concept-realism and of the Western essentialist and thus eternalizing view on ontology. Serving this view has been the endeavour of Western philosophy until Kant.
η. Although the cascade is never the same, it does have some unchanging patterns holding its dynamism away from sheer randomness. Likewise for the swimmer or the ballet dancer. A stochastic ontology does not preclude eternal, unchanging form, albeit as a form of movement, as a differential equation covering all specifics of an actual flow of dynamic relationships between movements. ∫ Is the holomovement of a Buddha not the perfection of his or her unique form of movement ? θ. Discovering the sharp blade of the Sword of Wisdom brings the end of all possible reasons for substantialism. This does not leave us with nothing, for some thing is left after substance has been cleared ; this is sheer process, ongoing flows of actual occasions featuring momentum, architecture and sense. Distinguishing between pre-existence and existence, on the one hand, and funerary ritualism, on the other hand, co-emerged. The first suggestive evidence of this is found in the Cave of Pech Merle (ca. 16.000 BCE). By it, the relative world, given to properly functioning senses and a modular mind, is distinguished from an absolute realm, one deemed to exist "before", "next to", "above" or "behind" these relative states of matter, information & consciousness. In the "natural" mode of cognitive functioning, one given to ontological illusion due to the constant (ab)use of the substantialist instantiation, pre-existence was envisaged as a deeper stratum of existence ; eternal, timeless, spaceless & undifferentiated. In this "dark ocean", a creative potential was afloat. Pre-existence is not a dead nothingness, a void, but filled with the (passive) potential to create (light, spacetime, life & love). In Hermetism, as well as in the Qabalah, pre-existence points to more than just a void. But these metaphysical systems, while abstracting the absolute as a category, fill it with the ultimate essence of God Himself. God is then the "substance of substances" (or "image of images", "power of powers" - cf. The Cannibal Hymn to Pharaoh Unis, 2002). Acting as the world's underlying self-sufficient ground, the ultimate level is substantialized. The same happened in the theologies of the three monotheisms, in Jainism and in Hinduism. The fact this crucial ontological distinction is brought into play is not the problem, its reification is. The world-ground cannot be a substance or the world would never have come into existence. No becoming would have been possible. The presence of this world need not be explained. How this presence came to be is the question. Logically, what precedes the Big Bang ? § 2 The Logic of Being & the Fact of Becoming. α. Parmenides of Elea (ca. 515 - 440 BCE), inspired by Pythagoras and pupil of Xenophanes (ca. 580/577 - 485/480 BCE), was the first Greek to develop, in poetical form, his insights about truth ("aletheia"). In his school, the Eleatics, the conviction human beings can attain knowledge of reality or understanding ("nous") prevailed. But to know this truth, only two ways were open : the Way of Truth and the Way of Opinion ("doxa"). These are defined in terms of the expressions "is" and "is not". If a thing both is and is not, then this either means (a) there is a yet unknown difference due to circumstances or (b) "being" and "non-being" are different and identical at the same time. This answer is relative (circumstantial) or contradictory. If a thing is not, then it cannot be an object of a proposition. If it could, non-being would exist ! This answer is pointless.
As the last two answers must be false, and only three answers are possible, the first answer must, by this reductio ad absurdum, be true, namely : the object of thought "is" and is equal to itself from every point of view. β. With Parmenides, pre-Socratic thought reached the formal stage of cognition. Before the Eleatics, the difference between object and subject of thought was not clearly established (cf. the object as psychomorphic). Myth and unstable pre-concepts prevailed. Moreover, the basic formal laws of logic (identity, non-contradiction & excluded third) were not yet brought forward and used as tools to back an argument. Logical elegance was absent, and a thinker like Heraclitus was deemed "dark". The strong necessity implied by the laws of thought had not yet become clear. But with the Eleatics, the mediating role of the metaphor is replaced by an emphasis on the distinction between the thinking subject (and its thoughts) and the reality of what is known. γ. The idealism of the Eleatics, thinking the logical necessities of thought, nevertheless confused a substantialist with a predicative use of the verb "to be" or the copula "is". That something "is" ("Dasein") is not identical with what something "is" ("Sosein"). Properties (accidents) are deemed to exist apart from the "being" of the substances they describe. But as Kant would point out much later, the verb "to be" only instantiates the properties of an object, not a deeper sense of "being-there". For the substantivist, non-being is pointless. The empty set equals the void. Hence, only an all-comprehensive "Being" can be posited. δ. We know Parmenides asserted further predicates of the verb "to be", namely by introducing the noun-expression "Being". The latter is ungenerated, imperishable, complete, unique, unvarying and non-physical ... He did not conceive the absence of certain properties as non-being, nor could he attribute different forms of "being" to objects. What he then calls "Being" is an all-comprehensive being-there standing as being-qua-being, as "Dasein" in all the entities of the natural world (and their "Sosein"). A view returning in the phenomenology of Heidegger. ε. Democritus of Abdera (ca. 460 - 380/370 BCE), geometer and known for his atomic theory, developed the first mechanistic model. His system represents, in a way more fitting than the difficult aphorisms of Heraclitus, a current radically opposing Eleatic thought. Instead of only relying on the formal conditions of thought, the origin of knowledge is given with the undeniable evidence put forward by the senses. Becoming, movement and change are fundamental. Hence, non-being exists as empty space, as a void. If so, being is occupied space, a plenum. The latter is not a closed unity or continuum, a Being, but an infinite variety of indivisible particles called "atoms". The latter are all composed of the same kind of matter and only differ from each other in terms of their quantitative properties, like extension, weight, form and order. They never change and cannot be divided. For all of eternity, they cross empty space in straight lines. Because these atoms collided by deviating from their paths (the "clinamen" of the later Epicureans), the world of objects came into existence (why they moved away from their linear trajectories remains unexplained). Objects emerge by the random aggregation of atoms. Things do not have an "inner" coherence or "substance" (essence). Everything is impermanent and will eventually fall apart under the pressure of new collisions.
ζ. If all things are atoms, then how can rational knowledge be more reliable than perception ? Moreover, how can atomism describe atoms without in some way transcending them ? In epistemological terms : how can the subject of knowledge be eclipsed hand in hand with a description of this "fact" ? There is a contradictio in actu exercito : although refusing the subject of knowledge any independence from the object of knowledge, the former is implied in the refusal. This important problem is shared by all materialist & mechanistic models. It can be solved by positing a deeper ontological principle (encompassing both object & subject), like the actual occasion, and attributing to this physical, informational & sentient properties. η. Concept-realism returns under many guises : objectivists versus subjectivists, realists versus nominalists, empiricists versus rationalists, physicalists versus spiritualists, etc. Every time, either the subject of experience or the object of experience is eliminated, crippling one's understanding of the possibility & advancement of knowledge. The conflict is rooted in an ante-rational & substantialist prejudice seeking a firm, eternalized self-sufficient ground existing on its own, in and by itself. Such a ground can however not be found ! To clear obstructions to understanding the mind and its workings, it must be done away with. Critical epistemology realizes the discordant truce as the fundamental fact of reason. With the Greeks, the mythological element was put between brackets and so clearly identified. Science deals with sensate & mental objects only. These operate in a formal way, i.e. irrespective of context. Unlike ante-rationality, Greek rationalism was able to transgress the borders of its own geomentality, and establish international, panoramic perspectives. Discovering both the necessities of logic (operating our mental objects) and the importance of facts, its concept-realism forced it to seek an absolute, substantialist (essentialist) grounding of the objective and/or subjective conditions of experience & knowledge. As a substantial, self-sufficient ground cannot be found, this dramatic quest will never come to an end. For objects merely appear as independent & separate. § 3 Greek & Indian Concept-Realism. γ. With the gradual decline of Buddhism in India from around the beginning of the Common Era, Classical Hinduism emerged as a revival of Vedic traditions. The Advaita Vedânta, consolidated by Shankara (788 - 821 ? CE), represents the pinnacle of the revival of Hindu intellectualism during the Gupta Period (4th to 6th centuries) in the North and the Pallavas (4th to 9th centuries) in the South. This was the "golden age" of Indian civilization. Between the 2nd century BCE and the 6th century CE, the six systems of Hindu philosophy slowly emerged (viz. Sâmkhya, Yoga, Nyâya, Vaishesika, Mîmâmsâ, and Vedânta). δ. Considering the Absolute in its Absoluteness, i.e. Brahman, the Vedânta is consistent with what in the monotheisms "of the book" (Judaism, Christianity & Islam) is called the "essence of God", or God as He Is for Himself Alone. That God is a Supreme Being can be known (by the heart and by the mind), but what this Being of God truly is cannot possibly be known. His essence is ineffable and remains for ever veiled. The essence of God is only for God to enjoy ! He is the One Alone, for ever separated from His Creation. God and Brahman are the One Alone.
Brahman exists as a well-known entity : eternal, pure, intelligent, free by nature, all-knowing and all-powerful. In the root "brmh" reside the ideas of eternality, purity, etc. The existence of Brahman is well known from the fact of It being the Self of all ... for everyone feels that this Self exists (sic). This is the pre-creational, pre-existent Supreme Being, creating the world "ex nihilo". The pivotal difference between Vedânta and the monotheisms is the idea the innermost "soul" or "âtman" is ontologically identical with Brahman, whereas in the West no creature is able to deify to the point of total, absolute identity with God. The realized Vedantin however proclaims : "I am Brahman !" ... ε. Considering the Absolute in its Self-manifestations, Hindu concept-realism makes way for henotheism, for Brahman, the absolute substance existing from its own side, manifests as Îshvara and the latter is grasped as a multiple variety of Deities, all epiphanies of Brahman, or aspects of "mâyâ", the magical force of Brahman. Brahman is a magician and involved in creation, fashioning, sustaining & destroying it. Îshvara (Brahmâ) is the personal face of Brahman, but this face is never singular ; it is involved with the world in terms of an endless variety of epiphanies. Although Brahman is "without a second", Its personal dimension ("saguna Brahman" or Îshvara) is, as the theology of Amun has it, "one and millions". In the Vedânta, realization is the removal of the superimposition of the illusionary forms on Brahman. In Classical Yoga, enlightenment or "samâdhi" is the elimination ("nirodha") of the last element of flux ("vritti") from consciousness ("citta"). In both forms, the mystic returns to the original, inherently existing station-of-no-station of the Absolute in its absoluteness. It pre-existed, exists and will continue to exist. It is absolutely removed from anything except Itself, completely independent, eternal, imperishable, permanent and therefore the sole "substance of substances". The drama of concept-realism spread over the globe. The objects of reason were ontologized, ideas became things. In the East, the notion of an absolute, inherently existing Supreme Being creating the world was also explained in categorial terms. The six schools of Indian philosophy provide ample evidence of this impact of substantial instantiation on Hindu thought. § 4 The Tao. α. The Tao (cf. The Tao, Emptiness & Process Theology, 2009) has one absolute (non-differentiated) and various relative (differentiated) stages. These stages represent the absolute, self-existent Tao in various moments of self-determination. Each of them is the absolute Tao in a secondary, derivative and limited sense.
• the absolute Tao : the Great Limitless, emptiness, the Mystery of Mysteries ;
• the One : potential non-being or "WU" ;
• the Two : potential being or "YU" ;
• Tai Chi, the Great Ultimate : dependent actuality, the Five Forces.
β. The absolute Tao is non-local, non-temporal, non-differentiated, nameless, and empty of substance or inherent existence, without permanent and unalterable distinctions. This absolute Tao is beyond conceptualization and object of ecstatic, nondual apprehension. The absolute Tao is not turned towards phenomena, nor is it wholly self-referential. This "abstract of abstractions" cannot be conceptualized and named. It is Nameless. To reach the ultimate and absolute stage of the Way, we have to negate the opposition between being and non-being, positing "non-non-being".
This level can only be apprehended ecstatically, and this absolute ineffability is for Lao-tze the "Mystery of Mysteries". γ. Mystery ("hsüan") originally means black with a mixture of redness. The absolute, unfathomable Mystery or "black" does reveal itself, at a certain stage, as being "pregnant" with the "Ten Thousand Things" or "red" in their stage of potentiality. In the Mystery of Mysteries, being and non-being are not yet differentiated. Although the absolute Tao cannot be said to be turned towards the phenomena, in this utter darkness of the Great Mystery ("black"), a faint foreboding of the appearance of phenomena lurks ("red"). The Mystery of Mysteries is also the "Gateway of Myriad Wonders". Hence, the "Ten Thousand Things" stream forth out of this Gateway ! δ. When it enters its first stage of "pure" self-manifestation or mere self-determination, Lao-tze admits the One or active non-being assumes a positive "name". This name is "existence" or "being" ("yu"). The latter is also called "Heaven and Earth" ("t'ien ti"). The Way at this stage is not yet the actual order of Heaven and Earth, but only all possible things as "pure" being, i.e. again in potentia. The One begets the Two : Heaven ("yang") and Earth ("yin"), the cosmic duality. They are the self-evolvement of the absolute Tao, the Way itself. The One is the initial virtual point of self-determination of the Way, the Two bring about (as a mother) the possibility or probability of actuality and carry this over into actual reality. In this way, the One is the ontological ground of all things, acting as their ontological energy, while the Two develop this activity ("Ch'i Kung") into a particular ontological structure, Yin and Yang and the Three, i.e. the blending & interaction between these ("Tai Ch'i"). Hence Heaven is limpid and clear, and Earth is solid and settled ... In Chinese philosophy, especially in Taoism, a process-mentality was and is ever-present. Nothingness is posited, but again, within it, a very subtle creative potential is identified (cf. black with a mixture of red). A balance between natural flow & spontaneity (pragmatic naturalness) and emptiness (absence of inherent existence) is at hand. Where India & Tibet favoured the quick release from this world (represented by the dorsal "yang" channel), China focused on balancing the energy by letting it run in an orbit (making the upward movement of the "yang" channel flow into the ventral "yin" channel). This reinforces the life-force ("Ch'i") at the abdomen and aims at the Great Harmony between the powers of Heaven and Earth (at the heart). The wisdom realizing emptiness, able to understand these "mechanisms of heaven" as dependent arisings, operates the complete spectrum of human possibilities, not just one. Here, the absolute truth is not the single focus. Hence, the conventional and ultimate truths cannot be turned into a Single Truth. The danger of moving too much upward (toward Heaven) without being firmly rooted (in Earth) does not exist. § 5 The Dharma Difference. α. The notion the world is composed of existing things or phenomena, as it were carrying or holding their properties in accord with the cosmic law, i.e. of a certain characterizing nature (cf. "dharmata"), Buddhism shares with Hinduism.
It differs though in terms of Buddha's Second Turning of the Wheel of the Buddhadharma, teaching the absolute truth ("dharma") about all phenomena, namely their lack of inherent existence ("shûnya"), the fact they have absolutely no self-nature or essential own-nature ("nirsvabhâva"). β. Because a perfect understanding of Buddha's crucial wisdom teaching on the fundamental nature of all possible phenomena, one encompassing both the reality of sensuous objects and the subjective ideality of mental activities, is a difficult simplicity, it has led to countless attempts to save inherent existence in some way or the other. Only an absolute negation prevails (cf. the apophatic approach to mystical experience). β.1 Logically (and a fortiori philosophically), the strict Prâsangika-Mâdhyamaka approach found in the work of Nâgârjuna, Chandrakîrti, Shântideva, Atisha and Tsongkhapa is correct & definitive (cf. Emptiness Panacea, 2008 ; On Ultimate Logic, 2009). Hence, the non-affirmative negation of inherent existence eliminates all possible reified concepts. β.2 Experientially however (as Yoga & Tantra put into evidence), a direct non-conceptual experience, gnosis or prehension of the absolute nature of all things is possible. This involves a cognitive act of an absolute bodhi-mind apprehending an absolute object or totality "as it is". Nondual & non-conceptual, this experience is not without knowledge-content. The common thread in the poetical evocations on the basis of such graded meditative experiences involves a world of pure luminosity without shadows & edges, undefiled and unborn, pure and complete, much like "nirvâna", identified as permanent, constant, eternal and not subject to change. β.3 While philosophy remains immanent, yogis & tantrics dance to the rhythms of the poetical tale of the transcendent. These scientists & artists of the inner planes do not prove anything, they merely point out. What a community this would be if those who prove the end of proofs and those who experience emptiness were the same ! γ. In the Flower Garland tradition, in particular with Fazang in the seventh century, Buddha's teachings on wisdom are lifted out of the Indo-Tibetan emphasis on the other-worldly, on absolute reality. Absence of inherent existence was laid to rest in the fertile Chinese soil of the magic of the natural world, the quest for longevity, social order and the actual operation of how things exist conventionally, namely interdependent & interpenetrative. γ.1 Because gold lacks inherent existence, a craftsman was able to make an object of it - say, Empress Wu's Golden Lion guarding her palace hall. This gold is "li", principle or noumenon, the gold qua gold. The shape it takes in this case (the lion) is "shih", or phenomenon. Suppose gold were to take a bar-shape ; then it would actually cease to be gold in lion-shape. Gold is therefore equivalent to "gold in x-shape" ! Fazang's gold is not above or behind the shape it takes. The Golden Lion is gold, there is no gold behind the lion, nor is the lion an emanation of gold. Gold only exists as having some form or another, in this case Empress Wu's Golden Lion. When the lion shape comes into existence, it is in fact the gold coming into existence ! The shape does not add anything to the gold. γ.2 The phenomenon is the noumenon in its phenomenal form. The ultimate is not elsewhere but here and now, even in the smallest, meanest thing. Ultimate truth exists conventionally.
In this brilliant analysis, Fazang makes use of the logical necessity linking lack of inherent existence and dynamic (artistic) flow. He does so to integrate strict nominalism within the Chinese vision of enlightenment as living in harmony with the Tao, with the natural flow of all things ("Tai Ch'i"), and this based on the work of "ch'i" ("Ch'i Kung"). Indeed, the word "li" also carries a positive connotation, namely the "true thusness of mind", inherently pure, complete & luminous. The Dharma difference defines a crucial divide. On the one side, we find metaphysical systems seeking out substance and an unchanging, self-sufficient ground existing from its own side with inhering properties. They are "self-advocates" ("âtmavâdin"). Theirs is the substantivist approach. Its futility is unmasked by asking : "Show a substance as defined ?". On the other side, own-form or self-nature is totally relinquished and only the architectures of process remain. Its extreme accuracy is suggested by the precision of Schrödinger's wave-equation. This most fundamental of distinctions defines the ontological principle. This is not inherently existing substance, but interdependent process. The architecture of process, implying change, is fundamental but not random. If process were merely stochastic, then order would be impossible. Precisely because of the need to explain order did the Greeks and the Ancient Egyptians before them posit a self-sufficient ground. But seeking such a solid foundation has sidetracked Western philosophy since Heraclitus, whose message was not understood. No two moments are the same, the "same" river cannot be entered twice. The way up and the way down are, by enantiodromia, the same way. While a cascade is never the same, it can be distinguished from another because of certain constant elements in the way its water moves ... Process thinking identifies the stages of the differential changes as well as their form or style. Random movement (white noise) has no style and so can carry no information. But as soon as movement is coordinated, a structure can be discerned and insofar as this has constancy it can be described and repeated. There is no need for a self-sufficient ground to "stabilize" form, for the stability of change is not a kind of substantial channel or invisible matrix in which flow happens, but merely the particularities or forms of definiteness (or predictability). These are the kinetosyntax of change, whereas the purpose of change is its practice (or kinetopragmatics) and its sense or meaning is the sentient activity suggested by it (or kinetosemantics). C. Against Substance & Foundation. The core insight underlying the philosophy of process is absence of inherent existence. Only this radical negation of substance or essence makes it possible to consistently think movement and transformation, in short change and impermanence. This cannot be thoroughly realized as long as some inherent object or subject prevails. If substance goes, so does a self-sufficient ground. The difference between ground-level, object-level and meta-level can be maintained, but the ground-level is not a permanent, inherently existing seat made firm ! Instead of trying to find an underlying reality, process thought focuses on the momentum, architecture and sense of the flow of actual occasions. As the links of interdependence expand throughout the entire universe and this all the time, in the totality of interdependence or in the world as it is, phenomena are mutually interpenetrating.
Taking the world of actual occasions as the only possible world, the absolute nature of phenomena is not sought behind or outside it. The transcendent is a property of the ongoing flow of actualities in just the same way as the immanent is. § 1 The Definition of Substance. α. Substance ("substantia" or "standing under") is the permanent, unchanging, eternal underlying core or essence of every possible thing, a self-subsisting own-nature or self-nature ("svabhâva") existing from its own side, never an attribute of or in relation with any other thing. Hence, a substance solely exists by the necessity of its own nature and intrinsic identity ("svalaksana"). Its action is determined by itself alone. Traditionally, it is the principal category of "what it is" (cf. "ousia"). For Spinoza, there was only one substance, namely Nature or God. This substance had infinite attributes, of which each expresses for itself an eternal and infinite essentiality (Ethics, Part I, definition VI). β. If a substance were determined by something external to itself, then it would not be inevitable, compelled & necessary, but rather constrained. A substance is always Pharaonic. Without the presence of an absolutely free & omnipotent Caesar, the bond uniting things seems to be lost. Without substance, the properties of objects seem not to be carried, not to inhere. But things are just a dynamical flow with a certain kind of movement (momentum), shape (architecture) and intent (sense). And this the substantivists wrongly deem not to be enough for science, philosophy, ethics, economy & politics ... Substance is always linked with the idea of some thing existing on its own, by itself alone. Although objects can be isolated in a relative sense, they are never so in an absolute way. This means there is no self-identical core remaining untouched by change. But absence of substance is not absence of order. Order is possible because processes are not random, and they are not random because movement can have coordination, structure, style, etc. These kinetographic features are overlooked and instead identified as the vestiges of essential, non-accidental properties or essences. This is where the substantialist error creeps in. Logically, this difference is given with the distinction between the actualizing and the existentializing quantor.
• the actualizing quantor, "there exists" : affirming object x momentarily exists. The actualizing quantor confirms the mere existence of x. A set of predicates attributed to object x is present to the senses or the mind. This presence is spatio-temporally defined, and hence impermanent, i.e. featuring arising, abiding and ceasing. Merely existing object x arises when its presence is identified or registered by a subject or subjects of experience. It abides as long as this actuality, in all cases limited by space & time, continues. It ceases when it can no longer be apprehended or pointed at.
• the existentializing quantor, "there is" : affirming the persistent existence of x. The existentializing quantor confirms x inherently exists. A set of predicates attributed to object x is present to the senses and/or the mind, but these predicates are merely accidents of the substantial, self-identical core of x, a universal of sorts. With this, the substantial or essential nature of x is confirmed. If this substantial core changes, then x is no longer x ; in other words, x can no longer be identified as such.
§ 2 The Münchhausen Trilemma. α. The problems of foundational thinking are summarized by Albert's Münchhausen Trilemma.
Its logic proves how every possible kind of foundational strategy is necessarily flawed. The trilemma was named after the Baron von Münchhausen, who tried to get himself out of a swamp by pulling his own hair ! An apt metaphor to indicate the futility of trying to find a permanent underlying base, i.e. satisfying the conditions of the postulate of foundation. The latter states valid knowledge must in all cases be absolutely justified, in other words backed by a self-sufficient ground existing from its own side, inherently. β. Every time statement A accommodates the postulate of foundation by way of an absolute justification, three equally unacceptable situations occur. Such an absolute justification of the propositional form P of A implies a deductive chain C of correct arguments C', C", C''' ... with P as necessary final inference. How extended must C be in order to justify P in this way ? Three "solutions" prevail : (a) a regressus ad infinitum : There is no end to the justification, and so no foundation is found (C', C", C''' ... does not lead to P). The whole process of finding a last ground (needed to back justification) is undermined. A point at infinity is however not a problem per se. But it becomes one each time a final ground is needed. Then a regression disproves the logical attempt to articulate a foundation. (b) a petitio principii : The end P is implied by the beginning, for P is part of the deductive chain C. Circularity is a valid deduction but no justification of P, hence no absolute foundation is found. (c) an abrogation ad hoc : Justification is ended ad hoc, the postulate of justification is actually abrogated, and the unjustified ground (C' or C" or C''' ...) is emotionally accepted as certain because, seeming certain, it is deemed not to need more justification. This is of course unproven. γ. The Münchhausen-trilemma must be avoided by ceasing to seek an inherently existing absolute, self-sufficient ground for the possibility of knowledge and/or the cognitive act. This happens when one accepts critical science & metaphysics are terministic, i.e. fallibilistic and not eternalizing (nor nihilistic). But although the categorial system cannot be absolute, some of its general features (as given by normative philosophy) are necessary in a normative way (for we use them each time we think). Backing arguments to establish a certain conclusion is not the same as trying to find an absolute warrant. Logical inference can be absolute, but not absolutely absolute. Once the logical system (basic axioms, operators, truth-tables and rules of inference) has been established and accepted among all involved sign-interpreters, an absolute conclusion on a relative basis can in certain cases indeed be drawn, but not an absolute conclusion on an absolute base. Change the basic axioms (like identity, non-contradiction or excluded third) and what is certain in logical system A might not be in system B, etc. This is often forgotten. Classical formal logic is not self-evident. Just as in Euclidean geometry, changing a single axiom may introduce important variations. What at first seems impossible (like intersecting parallel lines), in the end exists both mathematically (as a mathematical object) and physically (as curvatures of spacetime). § 3 Avoiding Dogmatism & Scepticism. α. To avoid dogmatism is not to eternalize a position. No ad hoc abrogation is allowed.
If a circular reasoning or a regressus ensues, then one must accept an absolute justification cannot be given and the aim of dogmatism (namely finding such an absolute ground existing in and by itself) is futile and so trivial. β. To avoid scepticism is not to eternalize a contra-position. When a hidden agenda is present, scepticism is but a form of dogmatism in disguise. To criticize is to draw clear distinctions. To be sceptical is to overuse negation. At best, it is a dialectical move needed to outwit a dogmatic opponent, but cannot deliver a constructive tale about existence, nor give us any important answers. It is a wayfaring strategy, not a stable station. γ. The critic walks the Middle Way and has no affirmation or negation to defend a priori. Here only distinctions matter. They allow categories to emerge and organizations to unfold. These architectures or forms of information are always changing (have material momentum) and display intelligent design or conscious activity. The extremes of eternalism (accepting the substantial nature of objects) and nihilism (rejecting the existence of anything regular) are examples of respectively a dogmatic and a sceptic position. The eternalist stops the justification ad hoc, and posits an absolute justification on the basis of relative steps. The latter only lead to a relative justification. The leap made is logically invalid. Many strong relative reasons do not constitute an absolute base. Even a majority can err. So if an absolute justification is needed, then a self-sufficient ground must be found. The eternalist has negated too little. The nihilist accepts there is nothing substantial anywhere. But this does not lead to the kinetography of process and so a fortiori lacks the perfection of process. This sceptic has lost grip on all things because this conceptual apprehension of emptiness as lack of inherent existence, although correctly understood insofar as the negation of substantial instantiation is concerned, does not lead to the view of dependent-arising. Process as a dependent-arising is more than merely a stochastic display with no inherent existence, it is a spectacular magical show with, besides momentum (matter), also architecture (information) & sense (consciousness, sentience). The nihilist has negated too much. Distinguish between, on the one hand, the yogi of wisdom ("jñânayogin") and, on the other hand, the sophist (sceptic), merely criticizing & arguing without speaking up for anything, and the dogmatist, who argues without letting his own view depend on the outcome of the debate. Dwelling in extremes is to be avoided. Things are not inherently something (x), nor are they nothing (¬ x). They are a something manifesting properties (x) in the isthmus between inherent being and void nonbeing. Existence covers the middle ground. D. Conventional Appearance. Ontology addresses the two epistemic isolates in existence : the conventional properties of any object x and its ultimate characteristics. These are called "epistemic isolates" because to identify them a special & crucial differentiating cognitive act is necessary, namely one clearly identifying what is merely given (to the senses and the mind), the appearance of x, and one sharply establishing (realizing) the process-nature of x, in other words, x's lack of inherent existence. These two "natures", the conventional and the ultimate, are merely properties of x. The ultimate nature is not deemed "another" reality standing beyond, next to or within x.
As in the case of the Golden Lion, the gold and its shape are simultaneous. The first isolate is the conventional reality or conventional truth about x, the second its ultimate reality or absolute truth. Because the ultimate exists conventionally, there being no "ultimate" ontological plane or level, let us first analyse x's conventionality. We already listed the objects of ontology, answering the question "What is there ?". We found absolutely nonexistent objects, fictional objects, relatively existent objects and absolutely existent objects (cf. supra). To draw the line between what is there and what is truly there will shed light on conventionality and its illusionary appearances. To add "truly" merely points to the possibility something might appear to be there while it is not. Objects might appear as independent (inherently existing) & separate (isolated from other objects), while in truth they are not. Like optical illusions, this epistemological illusion (to be identified as an ontological illusion) can be grasped by conceptual reason but remains as long as this mode of cognition endures. Only nondual cognition finally takes it out. Then full-emptiness is (directly) prehended, namely "finding" the absence of inherent existence in all objects simultaneously with their universal interdependence and interpenetration, the union of bliss & emptiness. These considerations bring about the issue of universal illusion and the way this blends in with the valid conventional knowledge of science & immanent metaphysics. This is deemed valid, for producing functional knowledge, but mistaken, for appearing as substantial while this is found not to be the case. § 1 What is Truly There ? α. This question seeks the truth-value of objects, whatever their ontological status as absolutely nonexistent objects, fictional objects, relatively existent objects and absolutely existent objects. This is measured in terms of validity and the presence of a mistake. α.1 An object is valid when it can be identified, apprehended or grasped by a subject of cognition acting as object-possessor (note "prehension" is a special form of apprehension in that the subject cognizes in the nondual mode of cognition). An object is mistaken when it appears differently than it truly is, i.e. when it is incorrectly apprehended or misleading. α.2 Validity refers to the presence of objects. Hence, valid or invalid objects may be mistaken or not. Indeed, valid objects (such as those of science) may nevertheless appear differently than they truly are. In fact, all fictional and conventional objects veil their true, absolute, fundamental nature or suchness ("tathata") by the illusion of own-form or self-nature ("svabhâva"). β. Absolutely nonexistent objects are invalid and mistaken. They are invalid because nothing can be identified to correspond to them, not even logically. Hence, as logic precedes function, they have no functionality whatsoever. Although we understand the words "square" and "circle", the combination, i.e. a square circle, is nonsensical. They are mistaken because they appear to be something they cannot possibly be. Indeed, although it seems the phrase "a triangle with four angles" conveys some information, namely the presence of an object with three angles which has four angles, it is impossible to apprehend or imagine such an object at all. The phrase is therefore merely a string of black pixels on a white surface. γ. Fictional, relatively nonexistent objects are valid and mistaken.
They are valid because, insofar as they are public, one can point to them. Because they move us, they are functional. But insofar as they are private, the act of apprehension is private too and so only valid for a single subject of experience (reality-for-me or the first-person perspective). Fictional objects are mistaken because they represent something which is not as it truly is and this in a definite degree, i.e. by conscious deception. δ. Conventional objects may be valid and mistaken. They are valid because they can be identified as logical and functional realities/idealities. Insofar as this validity is concerned, they are scientific objects. But they are mistaken not because of any conscious deception, but because they appear to possess a nature of their own ("svabhâva", "ousia", "eidos", "hypokeimenon", "substantia"), while they are truly other-powered, i.e. depending on conditions & determinations outside themselves. This is what ultimate analysis seeks to prove (cf. infra). Once this is established, the valid appearance of conventional objects is not changed, but the mental obscurations or false ideation causing them to be experienced as self-powered are removed. The elimination of this ontological illusion or substantial instantiation voids their ability to fool us and opens the way to actually see their dependence, universal interconnectedness with other phenomena & exclusively process-based nature. ε. Conventional objects may be invalid and mistaken. Invalid because they cannot be logically and functionally identified, i.e. in no way apprehended by way of logic, argumentation and experimentation. The caloric fluid theory of old, the four humours or the epicycles at work in the Ptolemaic & Copernican models are good examples. These objects of outdated scientific theories have been disproved and so dismissed from the arena of paradigmatic scientific objects. These invalid conventional objects are also mistaken, for regardless of the fact they no longer function, they -just like valid conventional objects- posit characteristics existing from their own side. ζ. Finally, among existing objects there are those which are beyond validation and not mistaken. They are beyond validation because they refer to something every subject of experience can potentially identify in every sensate or mental object but never name, and they are not mistaken because they appear as they are, i.e. do not conceal their truth. These ultimate objects are nothing more than conventional objects apprehended without any sense of self-power. They simultaneously reveal (a) absence or lack of independent existence ("tathata") hand in hand with (b) dependent-arising ("pratîtya-samutpâda") or universal interconnectedness (interdependence & interpenetration). The objects prehended by the wisdom-mind of a Buddha are all of this category. η. Nonexistent & fictional objects are not the first aim of ultimate analysis. Nonexistent objects are not because their ontological and epistemic status is irrelevant to the question at hand. Fictional objects are not because their deceptive nature is apparent and so unconcealed. Conventional objects are the prime target of ultimate analysis, for the fact their true nature is veiled is not apparent. Quite on the contrary, to the mind of Homo normalis, they are self-evidently existing extra-mentally and substantially, i.e. from their own side.
Their accidents (qualities, quantities, modalities & relations) are deemed to inhere in their own essences, and this inherent existence is self-powered, i.e. isolated from conditions & determinations outside themselves. If these objects really exist the way they appear to the deluded mind, then it should be possible to separate the quantities, qualities, modalities and relations entertained by these objects from their supposed substantial core or essence ("svabhâva"). What remains after we remove all the accidents from an object ? Objects can be logically identified and do have functional effects. These can be found. But ultimate logic seeks to prove no object exists in accordance with our common ideas about it, i.e. such own-form cannot be found at all. Remove its accidents, and the object as a whole vanishes ! Remove the (logical & functional) properties, and the instantiation of the concept given by the copula "is" is out. Nothing remains. θ. Both natural and artificial conventional objects are deemed to possess characteristics independent of their observers. Indeed, we suppose these objects exist even if they are left unobserved. And of course, on the meso-level of reality, they do exist in a logical and functional way. But not substantially, i.e. without being subject to change. Indeed, the pivotal feature ultimate analysis seeks to disprove is the substantial, inherent permanency of conventional objects. So in terms of ultimate analysis, the fact these objects are found to be independent of conscious observers is not problematic per se, but the notion this independence is somehow an inherent feature of these objects is. Hence, inherent existence is the proper object of negation, i.e. the core feature of objects ultimate analysis disproves. The duality between objects & subjects is not a target, for suchness is directly apprehended by a nondual, non-conceptual, awakened mind. What is truly there ? After having identified what exists, one divides the lot into valid & invalid, unmistaken & mistaken, ultimate truth & conventional truth.
• conventional truth : valid & mistaken, or invalid & mistaken ;
• ultimate or absolute truth : beyond validation & unmistaken.
A valid object works efficiently. A consensus about the theory abstracting the outcome of experiments with the object is present. Facts concerning it are repeatedly confirmed. This tenacity of subjectivity & objectivity makes object x appear as independent & separated from another object y. But is this the case ? The world-ground cannot be found as a fixed, solid, inherently existing object. If so, valid objects are mistaken because they appear differently than they truly are. An invalid object does not work efficiently. It either lacks the logical conditions for efficiency or does not actually operate efficiently. Acquiring the conditions for efficiency is giving logic to the architecture of process. This is applying form, rule, code, algorithms, notion, idea, concept, theory, paradigm, etc. When these conditions are fulfilled -in order for the process to operate efficiently- semantic organicism must be present. Objects with style may lack overall order, i.e. a given organization of the meaningful features of their process. While (unconsciously) instantiating it, conventional understanding can remain neutral as to accepting substantiality. The conventional mind may ignore the idea of substance and continue to function. But although this grasping at a substantial "self" is indeed acquired, it is also innate.
The latter reflects the ongoing -unconscious- activity of the ante-rational modes of cognition, the mythical, pre-rational & proto-rational mentalities of the mind. In these modes of cognition, substantial instantiation was the "natural" way to stabilize the pre-concept & the concrete concept. In the course of the development of the human mind, this reifying tendency was so basic & strong, it even leaped into reason, deceiving formal cognition with concept-realism and its substantialist ontological prejudice & semantic adualism. For applied epistemology (the highest abstract mode of studying & reflecting upon the production of knowledge), methodological realism (at the side of the object of production based on experimentation) and methodological idealism (at the side of the intersubjective community of involved sign-interpreters communicating with each other) are maxims without which no valid conventional knowledge can be produced. This proves that, although conventional knowledge may well theoretically, in a transcendental inquiry, "purify" itself and attain critical understanding, practically it cannot purge the pragmatical substance-obsessions of researchers & thinkers. In the critical mode of cognition, truth, beauty & goodness are no longer ontologized. Although the object continues to appear differently than it truly is, it can no longer deceive us and so the so-called "safe house" of self-powered substance cannot be rebuilt. Only absolutely true objects are unmistaken. They appear as they truly are. There is no deception anywhere. They are the truth of their existence. Ultimate truth and absolute reality/ideality are identical ("dharmakâya"). They are therefore unmistaken. These absolute objects are beyond validation because they work perfectly, but this activity is nameless. The architecture of their process is a holomovement. To alter the world in terms of unity & harmony, they manifest propensity-fields of form ("rûpakâya"). § 2 Concepts, Determinations & Conditions. α. The "object" of the "natural standpoint" of conventional knowledge dictates (a) a reality "out there", existing independently (extra-mentally) and with a solidity from its own side and (b) an ideality "in here", likewise substantially established. The physical body is the first of these natural objects. Although part of the "subject", it nevertheless behaves in the same "objective" way as do outer objects. Moreover, objects "out there" seem even more to escape conscious manipulation, and so manifest tenacity, permanence, solidity and an unchanging character. These sensate & mental objects appearing in the "natural" world are problematic. β. Concept-realism is a way to consolidate the substantialist view on conventional knowledge. Concepts represent reality and/or ideality in a one-to-one relationship. However, general concepts or universals cannot be established on the basis of induction. The concept is a generalization on the basis of a finite number of elements used in the induction. Hence an unjustified logical jump from the singular to the general occurs. But in conventional knowledge, especially in valid non-scientific contexts, this happens all the time. Falsificationism has avoided this logical problem, but remains bound to a realism allowing "outer" objects to impact our senses. γ. Determinations are lawful connections between actual occasions. Conditions are assumptions on which rests the validity or effect of something else. All conventional objects depend on determinations & conditions.
They are solely powered by these. Actual occasions & events are linked if the conditions defining the category of determination are fulfilled. For example, in the case of causation, it is necessary, in order for an effect to occur, to have an efficient cause and a physical substrate (to propagate it). In general determinism, these determinations are not absolutely certain, but relatively probable. Science is terministic, not deterministic. If individual action and (as an extension) civilization are considered, events are also connected by way of conscious intention, escaping the conditions of the categories of determination. Indeed, without "freedom", or the possibility to posit nondetermined events, ethics is reduced to physics and free will impossible. How is responsible action possible without the actual exercise of free will, i.e. the ability to accept or reject a course of action, thereby creating an "uncaused" cause or influencing agent, changing all co-functional interdependent determinations or interactions ? Even if it remains open whether the will is free or not, morally, we must act as if it is. ε. Scientists are cognitive actors producing valid but mistaken conventional object-knowledge by way of corroborated empirico-formal propositions and theories. This is information triggering correspondence (with facts) & consensus (between all involved sign-interpreters). Everyday observation also involves experimentation & (inter)subjective naming, but, in the language-game of true knowing, a more solid, inert and tenacious objectification is at hand. Here, a series of more lasting connections between directly observable events is made, and categories of determination are put forward to organize these connections. The following irreducible types of lawfulness ensue : • causality : effect by efficient, external cause (example : one ball hitting another ball, or Cartesian physics) ; • interaction : reciprocal causation or functional interdependence (example : the force of gravity in Newtonian physics) ; • statistical determination : end result by the joint activity of independent objects (example : the long-run frequency of throwing two aces in succession is 1/6 × 1/6 = 1/36, the position or momentum of a particle, enduring correlation between two variables) ; • teleological determination : of means by the ends (example : standardization, final determination of actual occasions) ; • holistic determination : of parts by the whole (example : needs of an organ determined by the organism, impact of the electro-magnetic field on the objects within it). That conventional objects have no analytically findable self-nature or substantial own-form existing from their own side does not mean they are nonexistent, possessing nothing. They do not however possess themselves, but are the result of other-powers acting upon them, enacting the laws of togetherness, thrusting creative advance, performing the power & beauty of the symphony of interdependent & interpenetrative arising, making these objects arise, abide, cease & reemerge. They do not exist as substances, nor do they exist as nothingness, as stochastic voids. Things have no shred of substantial existence from their own side, but are part of interdependences. These involve (a) actual occasions depending upon each other in a determinate way, neither existing without the other, and (b) subjects of experience & objects of experience conditioning one another.
These types of interdependence (determinations & conditions) make it clear conventional objects are functional and so highly unlikely events, in no way the outcome of randomness & coincidence. Conventional reality is in itself a well-formed & functional totality, evidencing unity & harmony. Because it is the actual mayavic scene of illusion, suffering is pervasive. Not because of its nature as it is, but because of the obscurations & afflictions caused by ignorance of the true nature of phenomena, namely dwelling in the extremes of affirmation (acceptance) & negation (denial) and their conceptual elaborations : exaggerated desire or craving (pathogenic obsession) & hatred (pathogenic rejection). § 3 Valid but Mistaken Appearance. α. Valid conventional knowledge holds a justified view on conventional reality (a sense of the objective "outer" world) and on conventional ideality (a sense of subjective, "inner" selfhood). Organizing this valid scientific knowledge in terms of a paradigm covering the totality of conventional sensate & mental objects is the task of science aided by immanent metaphysics. This implies all possible logical & functional instantiations, i.e. empirico-formal propositions of fact (science) and arguable speculations about the totality of the world (immanent metaphysics). β. Validity implies logical well-formedness and the regulations of correspondence & consensus. This means a problem can be solved and/or a certain operation can be executed. In theoretical format, logic & functionality are transcendental (not transcendent !) and so represent the ideal of the norm. This ideal is not substantially given, but a set of rules (or information). β.1 Theoretically, the consistency of epistemology depends on the necessity of accepting that facts, besides intra-mental, are also extra-mental. When this normative set of principles & norms is actually applied (as in applied epistemology), logic & functionality incorporate the "as if" mentality of methodological realism & methodological idealism. β.2 Epistemology & science make use of substantial instantiation, causing the whole domain of valid conventional knowledge, insofar as the fundamental truth or nature of phenomena is concerned, to be mistaken, for truth-concealing. γ. Sensate & mental objects possessed by the conventional knower are impermanent and so constantly changing. This change is not random. It has order (information), momentum (matter) & sense (consciousness). γ.1 But to the conventional mind, operating in the first six modes of cognition, these objects in all cases appear as existing independently of other objects and isolated (separated) from them. γ.2 In the physical domain, there is the Einstein-limit of locality imposed by relativity : material signals cannot travel at speeds higher than that of the photon, a massless particle (and its own antiparticle) speeding at 300.000 km/s. A single photon is deemed to exist independent of the mind and separate from other photons. This limit defines the parameters of what is considered "physical". γ.3 In the domain of information, the binary code organizes all possible software. The "0" and "1" of this system are deemed to exist as independent abstract objects in "mathematical space". Their various manipulations & algorithms (poetically named "architectures") are independent from the electro-magnetic impulses with which they are joined and which they organize. γ.4 Sentient beings cognize by way of object & subject.
Both can be reified and then appear as independent & separate entities.

δ. The concealment of the true nature of things, namely their impermanence and non-substantiality, makes valid & invalid conventional knowledge mistaken. By making sensate & mental objects appear as existing from their own side, a difference is introduced between how things ultimately are and how they appear to a mistaken mind. This means the difference causing ignorance is epistemic and not ontological.

δ.1 Insofar as ultimate truth goes, there is only a single world-system as it is in its two aspects of actual world and virtual world-ground. Because all phenomena are at all time mutually interpenetrating & interdependent, they are fundamentally identical (i.e. lacking self-power). Can we say the total world (past, present & future) rises simultaneously ?

δ.2 The mistaken appearance of conventional objects due to the mentioned false ideation causes the world to appear differently than it actually is. This false appearance is the root-cause of all possible mental obscurations. Clear this, and the complete, pure and luminous totality emerging from infinity dawns, the union of compassion & wisdom, of a view efficiently dealing with conventionalities while realizing their process-based nature. Again : the Sun seems to rise in the East and set in the West. But this diurnal movement is actually the Earth rotating on its axis. Likewise, the Sun seems to rotate around the Earth. Actually, the ecliptic, the Sun's apparent yearly path, reflects the orbit of the Earth around the Sun. Despite the Lunar disk being rather constant, a Harvest Moon seems huge. Understanding the astronomy & the physics behind these illusionary phenomena does not take away the illusion. Likewise, conceptually grasping the limitations of the conceptual mind does not make the illusion caused by substantial instantiation vanish. But we are no longer fooled and merely grasp at the impermanence of it all.

So succinctly put, the conventional mind operating conventional knowledge about conventional objects is valid or invalid, but in all cases mistaken. Not because things do not work. Valid objects work. Not because things are merely nonexistent, for some work. Merely because conventional reality does not appear as it is. That is all there is to it. Projecting substance, it is merely process. Positing solidity, it is merely space. Presuming self-powered, self-settled self-nature, only otherness is truly found.

§ 4 Appearance, Illusion & the Universal Illusion.

α. Universal illusion cannot be identified, for positing "mâyâ" turns it into something particular, contradicting its universality. Neither can we exclude universal illusion by assuming "existence" equals "being known in thought". We assume the mental coincides with (represents) the extra-mental and move from this assumption to the affirmation this must be the case. This is illogical. Transcendence can only be approached with a non-affirmative negation. Posit nothing. Classical metaphysics is prone to this category mistake (assumptions are not certainties). Metaphysical realism (mind corresponds with reality) and metaphysical idealism (mind makes reality) are extremes to avoid.

β. The argument of illusion has objective & subjective terms :

• objective : logical & neurological arguments prevail. Because sensate & mental objects appear as independent & isolated and they are not, all conventional objects are illusions, i.e.
things appearing differently than they truly are, as it were concealing their true process-nature underneath the mask of substantiality. This by force of the logic of the definition of illusion. No subject of experience ever faces the totality of changes caused, so we must assume, by particles, fields & forces acting as a constant stream of stimuli on the surface of the receptor organs. Only after a series of complex alterations (transduction, relays & integration) is the neocortex -via the thalamus- informed (after projection on the primary sensory area) about the perceived states, events, occurrences & objects. But this thalamic projection into the neocortex, in accord with the language of the cerebrum, is not yet sensation. This it only becomes after the afferent pathways enter the verbal association area, immediately connecting them with the attention association area (while the primary sensory area has few connections with the prefrontal lobes !). Our sensations, because of their irreducible and pertinent interpretative, constructive, conceptual, personal nature, could be a kind of fata morgana or mirage, composed of distorted sensory items. Ambiguity is the least one can say of the direct observation of sensate objects. Descartes was right, our senses are unreliable to inform us about the world at large ; they process a very narrow band of available possibilities.

• subjective : the most objectifying operator of consciousness, namely cognition or mind, works in various modes. In the ante-rational mode, sensate objects appear in contexts and have no meaning outside these. In conceptual thought, which is formal, critical & creative, the theoretical connotations grasped by the subject of experience make it impossible to witness sensate objects devoid of interpretation. Even if so-called "subjective factors" are reduced or eliminated, it cannot be conceptually known whether a collective mirage is at hand or not.

γ. Universal illusion ("mâyâ") is the result of superimposing a false view on the world-system. It is called "universal" because it touches all possible sensate & mental objects. It is an "illusion" because this is like obscuring what is at hand with something not at hand.

γ.1 If no object of knowledge can be found able to resist the ultimate analysis proving its lack of substance, then the appearance of independent & separate permanence is problematic. If all objects lack existence from their own side, self-settled, then no object should appear as such. If all do, one must conclude all conventional thinking, although valid logically & functionally, is bewitched, i.e. as it were "under the spell of Mâra", destroying the wisdom realizing emptiness and leading to mental obscurations and afflictive emotions.

γ.2 This explains why only great compassion, skilfully exploiting dependent-arising, is able to prepare the mind to sober up and break through all possible substantial instantiations, prehending the world-system only in terms of the existential instantiation (cf. infra). Is this universal illusion the price we pay for coupling our sentience with biological systems like the Hominidae ? Is this "the Fall" ? Then salvation is like merely recognizing the nature of mind. We are no longer naked, but may choose to take off our clothes at any moment ... Mental objects may last but are not permanent. Mindstreams at least last a lifespan, if not longer ... Sensate objects, produced by perceptions and interpretations, are also impermanent.
Some very much so, while others enjoy a long abiding. But eventually, they too will cease. To this uncertainty is added the illusionary nature of these objects, for they appear as if being "out there" and "self-powered", but are in fact devoid of any trace of findable own-nature. Like in a dream, things are not what they seem. Consider the consistency of the dream itself, especially its solid physics. As soon as gravity comes into play, the conventional mind as it were automatically reifies its objects. This is nearly a reflex. We are drawn back to "believe" a wall is a solid object "out there". We are sure this can be found to be the case. Common sense is based on these hallucinated assumptions. Take out gravity, and the deeper microlevel comes into perspective. Objects flash in and out of existence, and their properties depend on how they are being observed. They are also dependent & non-local (universally entangled). Likewise, on the macrolevel, conventional objects moving very fast experience time dilation & length contraction. How can these properties be reconciled with the conventional objects of common sense ? Clearly, the question about the ultimate truth of phenomena comes first.

E. Ultimate Suchness/Thatness.

The ultimate nature of all possible phenomena can be proven, expressed and experienced. The proof purifies the conceptual mind to let go of reification (the substantial instantiation). The expressions of ultimate truth are non-conceptual, poetical. Its experience direct. This calls for (a) conceptualization without reification, cutting the discriminating mind and (b) the direct prehension of the ultimate nature of all things. As a continuous symmetry-transformation, this awakened continuum of pure radiant awareness, empty of intrinsic existence, never ceases. It gives rise to the "special" apprehension or prehension of a pure mindstream experiencing the absolute truth continuously. From the side of this enlightened or awakened mindstream, nothing but the absolute truth prevails ("dharmakâya"), but insofar as this Clear Light* bodhi-being aids others, it assumes bodies of form ("rûpakâya") manifesting great compassion ("mahâkarunâ"). The body of truth represents the Suchness ("tathatâ"), the transcendence of the absolute, the ultimate. The bodies of form are its Thatness ("tattva"), its immanence or being right "there", reliably before us. In an unmistaken mind, these two continuously happen together.

§ 1 The Katapathic View on the Ultimate.

α. In the positive approach of the absolute, it is deemed possible to describe the ultimate (both as reality and truth), to conceptually identify its properties and to convey this to others by means of "holy words".

α.1 The hieroglyphic script is a monumental example of the principle. Here the glyphs themselves possessed operative power ("heka" or magic). In the monotheisms of "the book", God inspired His prophets to write down what He wants for us (as in the case of the Bible) or He made His tale directly descend (as with the Koran). In the East, this positive tale is found in the descent ("avatâra") of the Gods themselves, incarnating as gurus embodying cosmic consciousness ... Alas, nothing of this endured !

α.2 The katapathic approach has an absolutist conceptual framework to offer, one in which the absolute -as God, Gods or Goddesses- becomes the supreme reified object. Such a framework is possible but invalid.
β. Insofar as the katapathic view goes, conceptual knowledge should at least be able to convey a conceptual message from the Divine. But this can only happen if our natural languages somehow "connect" with the Divine by force of an onto-semantic aduality supposed to inherently exist between the absolute and human language.

β.1 As ultimate analysis, by evidencing how all conventionalities (like languages & concepts) are relative and impermanent, proves the absence of such an onto-substantial aduality, the katapathic view cannot be properly argued. No "natural bridge" between concepts and the absolute can be found.

β.2 Is adherence to such a framework then a wrong view fed by emotional familiarization & faith ? The more religion reifies, the more violent the confrontations with other-believers may be. Insofar as such exercises of faith are viewed as anthropological data, these blind beliefs deserve respect, but in terms of the longing for wisdom, they are worthless.

γ. The importance of conceptual preparation must be clear. To purify the conceptual mind, reification must end. Then, by way of existential instantiation, concepts are merely logical & functional. Per definition, the conceptual mind cannot touch the absolute, prehended by non-conceptual nonduality only. But if the conceptual mind remains tainted by gross, subtle & very subtle obscurations (substantial instantiations), then, per definition, such prehensions are also impossible. So one needs both the purified conceptual mind and nondual prehension. The first formal thinkers believed concepts represented the absolute. The illusion of permanence, taking objects at their face value, was identified. Substantial objects & subjects emerged, hindering the production of novelty and the élan of creative advance. After two millennia of vainly seeking stability, the Copernican Revolution brought about the understanding that conceptual reason cannot find any self-powered object at all. Concepts are convincing overlays, suitable fabrications & potent hallucinations. Ergo, the concept of the Divine as a "substance of substances" is an anachronism. The tale of the Divine is necessarily merely the way of the sublime poet and his fleeting, transient and rhapsodic conceptualizations devoid of self-settled powers.

§ 2 The Apophatic View on the Ultimate.

α. In the apophatic view, there is no Divine tale to give. Language and its concepts never suffice to convey anything concerning the absolute. Only direct, nondual experience is of any use here. Conceptual preparation is accepted, of course, but it is never the cause of awakening, for the latter is beyond any possible affirmation, denial or combination of both. To give credence to any Divine tale beyond its playful poetical value is unreasonable and so rejected.

β. To enter the mind of Clear Light*, a clear-crisp conceptual mind is the necessary condition of "purity". Such a mind no longer substantially instantiates its objects. But to "see" emptiness this does not suffice. Nondual cognitive prehensions must "cap" the activities of this purified conceptual mind, allowing the awakened mind to profoundly rest in its existential instantiations, continuously enjoying the manifestation of the union of wisdom & compassion, of formless & form.

γ. Transcendent metaphysics is possible. But these speculations do not articulate valid metaphysical statements. Only immanent metaphysics is able to claim any validity in the rational sense of the word, i.e. as part of an argument.
γ.1 Transcendent metaphysics is "valid" in the sense it too works in terms of object & subject, albeit in an absolute extension.

γ.2 Not all poetry is the same or of the same artistic value. So as a criteriology dealing with the hermeneutics of poetry, transcendent metaphysics may have a future. Un-saying does not mean nothing can be said. It merely points out concepts, words & languages do not suffice in describing the mystical experience, the unveiling of the concealed, the recognition of existence as it is and just that. This is ineffability, like the smell of a rose, wordless. The conceptual mind cannot grasp the denotative sense of what mystics experience directly, i.e. the nondual, non-conceptual inseparability of bliss & emptiness of the mind of Clear Light*. The apophatics do speak about their experiences, but only in a connotative sense, stressing that no logical acceptance or denial is able to describe this nondual state beyond all possible affirmation & negation. But if something in addition to what is explicit is implied or suggested, then the Clear Light* has all possible Divine qualities, it is eternal, unchanging, unborn, etc. The danger for a relapse into katapathic theology or buddhology is real here. The mind of devotion has a tendency to invent too many metaphysical compliments. Absence of denotation means a science or metaphysics of the actual station-of-no-station of ultimate enlightenment is impossible. But although no positive, denotative & significant sense can be established, awakening can and must be the object of poetical licence. A hermeneutics of mystical language and a transcendent metaphysics of awakening are therefore not out of the question, nor is a scientific preparation of bodhi-mind. But to conceptually catch non-conceptuality is impossible.

§ 3 The Non-Affirmative Negation.

α. An affirmative negation negates A and by doing so affirms B (when negating "day", "night" is affirmed). A non-affirmative negation negates A and affirms nothing else. When the set of all properties of A is negated, the object itself vanishes. This vanishing is not an instance of nondual cognition (a prehension), but -when carried through on all sensate & mental concepts- the end of the purification of the conceptual mind. This pure conceptual mind is the precondition of prehending emptiness, the true nature of all objects of cognition, but not the cause of such an unmistaken mind.

β. The object of negation, or what is to be negated, is not the subject or the object of cognition, nor is it the duality at work between these. Neither is it the absence of these, the union of these or any combination of these. What needs to be exhaustively & non-affirmatively negated in order to condition the mindstream to realize ultimate truth, is the reification of any thing x or ¬x. Call the mental operation actually doing this "zero-ing".

γ. Zero-ing purifies the conceptual mind, making it step by step suppler and more transparent. Then, at some point, this allows the mind to undo itself of its reified concepts & substantivist conceptual elaborations, as it were piercing through the generic image it made of all emptinesses, purging itself from the last remnant of very subtle reification. At some point, the fabricated approximation, appearing less dense after each and every negation, is gone and the world as it is is prehended. Purifying the conceptual mind is arresting substantial instantiation and eliminating the cause of these instantiations. A calm mind is necessary.
This is a concentrated & compassionate mind. Meditative equipoise is perfect concentration on any object of the mind. When this is done with coarse objects, the practice extends to subtle & very subtle objects. Then the mind takes the emptiness of any object as its object of concentration. When able to analytically investigate emptiness and stay perfectly calm (with a sole focus on the emptiness of all possible objects), special insight dawns. This new ability then needs to be trained. Eventually, a totalizing generic image of all possible emptinesses is reached. When the emptiness of this generic image is clearly realized, the reification of concepts has come to an end. The last concept, the emptiness of the generic idea of emptiness, is non-affirmatively negated. With the elimination of all acquired substantivism, innate self-grasping can be addressed. This refers to the obscurations present in the ante-rational modes of cognition. When these too have been reversed, the complete continuum of the conventional mind is finally purified and awakening (the realization of bodhi-mind) may manifest.

§ 4 Fabricating the Ultimate : Ending Reified Concepts.

α. First, ultimate logic needs to be understood. After many decades of daily work, this can be done by conceptually grasping the instantiations step by step. Applying them by using various inner & outer objects brings about a generic idea of emptiness. It is called "generic" because it relates to all members of the set of possible cognitive objects and their emptinesses. It is as if analyzing all the rooms of a house before presenting a synthesising picture of the house. But this mental procedure is still non-meditative and born from the conceptual activity of the apprehending mind.

α.1 During unwavering concentration in equipoise tranquility on this generic, totalizing idea and its emptiness, the moment comes when the conceptual mind as a whole is purified. The next moment is not yet the direct experience of emptiness, but merely a perfected approximation. When this happens, no coarse & subtle obscurations (discriminations) are left and the mind is fully prepared for the nature of mind to shine through unimpeded. The moment this nondual Clear Light* actually penetrates the purified mind -no longer reifying conceptuality-, the direct experience of non-conceptuality starts.

α.2 The actual moment bodhi-mind begins is spontaneous, uncontrived and born out of nothing (not caused). Likewise for all possible prehensions of the nondual, nonconceptual mind.

β. As long as emptiness is approached indirectly, the reification of concepts (their substantial instantiation) has not thoroughly ended, and so -at a subtle level- the mind is still impure, tainted, obscured, ignorant. But the generic idea is a ladder, a totalization of all possible conceptualization regarding the emptiness of persons and phenomena.

β.1 By taking this idea as the basis of concentration, the reification of all possible concepts can be undone and when this happens on a continuous basis, the process of purification of the conceptual mind has ended. Coarse & subtle obscurations stop and the purification of the very subtle innate reification (born out of ante-rational cognitive activity) begins.

β.2 Slowly the opaqueness of the generic idea fades, becoming absolutely transparent. But this transparency is not the cause of the experience of emptiness. Fully recognizing the mind of Clear Light* is needed.
γ. When, after purifying the conceptual mind, emptiness is directly witnessed for the first time, nondual cognition is no longer put on hold and the process of its (non-conceptual) emancipation may begin. This happens by purifying the mind from the process of reification still active in the mythical, pre-rational and proto-rational modes of cognition. The essentializing activity of the conceptual mind (in its formal, critical & creative modes) is acquired. To enter nonduality, the very subtle reification to eliminate is innate.

γ.1 Only when the minds associated with the first six modes of cognitive activity have been thoroughly purified by dereifying their objects, is the mind like the purest diamond. Then there results, with reference to the "grasper" (the knower), the "grasping" (the knowledge) and the "grasped" (the known), a complete coincidence with that on which consciousness abides & by which it is "anointed". The hexagonal mind loosens the knots of ignorance, and when the fuel of the fire is gone, the fire goes out.

γ.2 This is not awakening yet, but the final purification of the mind as a whole, the stepping-stone to Buddhahood.

∫ A mind lacking compassion may misconstrue the end of conceptual reification (the purification of the conceptual mind) as the first moment of awakening. The purification of the conceptual mind leads to the end of reification. At this point, not a single object is deemed substantial. All is process, i.e. dependent-arising defined by momentum, architecture and sense. This purity can be trained by way of study, reflection and meditation. This is the science of preparations. Understanding all logical possibilities and conceptually grasping the absence of inherent existence can be done without meditation, but this does not lead to the end of reification ; it is merely a start and may lead to nihilism. Balanced concentration on a single coarse object like a flower is not easy. To realize the meditative equipoise of calm abiding, abstract objects are even more difficult. Successful calm concentration on the emptiness of any object is the next step. Not a coarse object, nor an abstract object is at hand, but their ultimate property, their emptiness. This has to be epistemically isolated. Often, analysis makes calmness leave. Likewise, too calm a mind cannot find the impulse to analyze. So to achieve special insight, coupling calm abiding on emptiness with analysis of emptiness, takes years of long meditative sessions. When this superior seeing is finally realized, the analysis of emptiness enhances tranquil concentration on emptiness. This leads to profound encounters with the absolute property of each and every sensate or mental object of mind. With superior seeing a generic image is construed. Realizing its emptiness is the purification of the conceptual mind, the end of reification. The end of reification is not yet "seeing" emptiness, nor is it awakening. To "see" emptiness, the mind of Clear Light* has to be non-conceptually prehended. A purified conceptual mind is therefore a necessary condition but not a sufficient condition. To awaken, the mind as a whole needs to be purified, not only from its acquired obscurations, but also from the innate. What is realized at the end of the purification of the conceptual mind is not a direct experience of emptiness, but the very subtle conceptual realization of emptiness. The mind has indeed been freed of self-cherishing and acquired self-grasping has been eliminated.
In itself, this is a very high spiritual achievement, endowing the mindstream with lasting, irreversible qualities. But although lofty, this proximate emptiness is not the same as actually "seeing" emptiness. It is still contrived, and thus planned, manipulated and somehow artificial. It remains conceptual, albeit on a very subtle level. But precisely because it is conceptual, it cannot be said to be a direct, immediate, natural, spontaneous realization.

§ 5 The Direct Experience of the Unfabricated Ultimate.

α. The direct experience of the ultimate is ineffable. It is non-conceptual. One cannot describe the smell of a rose. Pheromones have a vocabulary of their own. Denotative conceptual rendering is impossible. Likewise, the exact nature of any atomic particle before observation is terministic and paradoxical. How to explain "superposition" conceptually ? Only in the language of mathematics can this be done (the standard notation is sketched at the end of this section). But one may, no doubt influenced by their smell, compose a poem about the rose.

β. To witness the unfabricated nature of emptiness calls for existential instantiations (a pure conceptual mind) and a prehension of the absolute. Duality is constantly carried to a point at infinity and so nonduality is what remains. Nothing positive can be said here. The poetry of the inseparability between directly seeing emptiness (wisdom-mind) and the interconnectedness of all events is what is left ...

γ. Because duality remains present even at a point at infinity, nondual cognition is bound to experience the conventional and the absolute simultaneously. There is not a single truth, but two truths. Although one of the two is unfabricated and the other is contrived, the conventional (the result of collective delusions) is part of the equation. The latter brings in compassion again, for what use is absolute truth if not all sentient beings share in the same direct experience ? Suppose the logical boundaries established by criticism are like the frontiers of the country of conceptual thought, bordered on one side by the non-conceptual mind of Clear Light* and on the other by all pre-conceptual and conceptual modes of cognition. Insofar as philosophers turn away from this demarcation, as Kant did by denying intellectual perception its place, and so never point to what lies beyond the border of conceptuality, their view on emptiness does not even take the Clear Light* into consideration. This would be a "dead" interpretation of emptiness by way of the "dead bones of logic" (Hegel), one limited by conceptual thought and missing the purpose of ultimate analysis : to end reifying concepts by way of concepts, the precondition for looking over the border towards the country of Clear Light*, a country the existence of which, as yogic perceivers show, cannot be denied ! Of course, logically, as Descartes pointed out, the "lumen naturale" or mind of Clear Light* is before any possible conceptualization. For Critical Mâdhyamaka, and correctly so, no logic is able to refute the Middle Way. Nothing about "nirvâna" can be affirmed (all eternalization avoided), and emptiness meditations on the mind itself find no ground to reify any part of its operations. Thus eliminating its substantial instantiation, consuming all possible fuel, extinguishes the fire of reification and makes the mind effortlessly & spontaneously (not causally) arrive at the "other shore" (all nihilism avoided).
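As announced in § 5, α, the standard quantum-mechanical notation for superposition can be sketched here ; this is textbook formalism, added for illustration only and not part of the author's argument. A two-state system before observation is written as

$$|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

Before measurement, the state is neither |0⟩ nor |1⟩ ; upon observation, it yields |0⟩ with probability |α|² and |1⟩ with probability |β|². The formalism affirms no more than these weights, which is precisely why superposition resists conceptual, denotative rendering while remaining mathematically exact.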
The mind, besides being known by conventional knowledge as an object of conventional truth, is also known by ultimate knowledge as an object of ultimate truth, i.e. lacking inherent existence. While the mind of Clear Light* is not a substantial part of the objective side of the view, it is introduced by accomplished yogis as a hypothetical subjective fruit each & every sentient being may, with due effort, directly experience. This refers to the presence of an enlightenment potential in all sentient beings. This is not the same as logically affirming Divine qualities inhere in this potential from the start. Instead, they are generated as the result of emptiness-meditations on the mind, turning successful because all sentient beings possess the potential for enlightenment from the start. Affirming the ineffable empty nature of this wisdom-mind does not hinder master yogis from construing the Clear Light* as an interpretative, non-empty object of poetry, praising its inherent qualities, said to endure despite adventitious ignorance & defilement. In fact, the profound yogic experience of Dzogchen & Mahâmudrâ experts confirms this to be the case, and this despite the definitive logic proving conceptual thought cannot penetrate non-conceptual, ultimate truth. However, from the side of logic, these accomplished yogis with their sublime poetry only inspire, uplift and act as excellent & sublime examples. This has to be made very clear, for the object of this art of the Great Perfection, positing the inseparability of the primordial base (objective "dharmadhâtu" or "khunzi") & the mind's natural clarity (subjective mind of Clear Light* or "rigpa"), has no conceptual ground whatsoever. Within the country of concepts, Nâgârjuna's logic is final ; nothing can be affirmed about ultimate truth ! No logical, conceptual path leads to the beyond of discursive thought, only to its border, and so one is left to develop concepts ending the reification of all concepts. This is however not the end of reifying cognition, at work until the last, tiniest drop of reifying fuel is burnt and beyond !

Let us summarize this in the traditional way (cf. Kamalashîla) :

(1) the path of accumulation : the mind is made pliant (compassionate) by generating the mind of awakening for the benefit of all sentient beings ("bodhicitta") and emptiness is conceptually studied, reflected upon and taken as an object of meditation on coarse (outer), subtle (inner) & very subtle (secret) objects. Special insight ensues when calmness & analysis can be combined in such a way that they reinforce one another ;

(2) the path of preparation : using this special insight or superior seeing, a generic, highly refined conceptual image of the emptiness of both persons & phenomena is realized. A very subtle conceptual generic idea of emptiness results. The conceptual mind (with its formal, critical & creative modes) is completely purified and acquired self-grasping ends. All coarse & subtle obscurations end, but very subtle ignorance remains.
An approximation of the direct experience of emptiness is realized ;

(3) the path of seeing : emptiness is directly observed for the first time, without the use of concepts, but non-conceptually in the nondual mode of cognition - this is a decisive turning-point, implying genuine transformation of mind ;

(4) the path of meditation : to further stabilize the nondual mind, innate self-grasping -resulting from the residual activity of the ante-rational mind (with its mythical, pre-rational & proto-rational modes)- is tackled, and so the very subtle obscurations (escaping the purification of the conceptual mind) are gradually totally eliminated ;

(5) the path of no-more-learning : the hexagonal mind (with its six modes of cognition, three ante-rational & three conceptual) is totally purified from all possible coarse, subtle and very subtle obscurations, leading directly to complete, irreversible and total awakening, prehending emptiness and dependent-arising simultaneously.

F. The Ontological Scheme.

The heart of ontology is the logic of the ontological principal, the leading idea acting as common ground shared by all possible things, existing, nonexisting or fictional. In the present critical metaphysics of process, this ontological principal is not a substance, but a process. It is not self-powered, self-settled, but other-powered. Perfected, these actual occasions show a continuous kinetography, unchanging architectures of change. But these continua are nevertheless always grafted onto the coordination of movement, of changes in momentum, code & sense. Awakened, a continuous symmetry-transformation or holomovement is at hand (devoid of suffering). Mostly however, the kinetography of change is discontinuous, i.e. a-symmetrical (causing suffering). Because the ontological principal is a process, it cannot be identified with the substances "matter" or "mind". In fact, the deeper, more profound leading principle is common to both. The ontological scheme is a sketch of the basic concept of this metaphysics of process. This is based upon the most concrete elements at work in our direct experience, close to how things are found ; as a stream of experience constituted by "droplets", "drops", "events" or "moments" of singular, individual experience. These are the final things of which the concrete world is made up. Nothing more can be found behind them. Nothing more real can be found.

§ 1 Event & Actual Occasion.

α. Consider streams of events constituted by singular droplets of happenings acting together. These are interdependent phenomena, each being the outcome of other-powers, namely determinations & conditions other than the event at hand. Actual existence is that what happens. Virtual existence is that what may happen.

β. Every event has duration, and so starts, abides & ceases, so it may reemerge. An event is therefore not a momentary instance, a single element of what happens (or could happen), but a very short event-interval en.Δt of time (t), stretching from t to t + dt and packed with actual happenings. Ergo, events cannot serve as the ontological principal. We need to move to a more fundamental level, and ask what constitutes a single event-interval ? Merely instances, moments or "droplets" of things actually happening. So, each time something is happening, there is an actual occurrence in the world.

γ. Actual occasions happen in the world. They are per definition concrete, i.e. embodied by momentum, organized by laws and an object of sense (or meaning apprehended by a possible observer & knower).
What happens in the world is "the concrete" and there is nowhere "another world". The transcendence of this world, the world-ground, is not other-worldly, introducing a (Platonic) rift in ontology positing more than one ontological plane ; it is not concrete, but merely abstract. There is only the world and so only a single ontological plane.

γ.1 The world-ground is not a transcendent Real-Ideal, but abstracts of definiteness prefiguring, in terms of altering fields (frequencies) of likelihood, the world-to-come. The world-ground is merely the possibility of the next moment of the world, not another world, a "richer" ontological ground, nor a self-sufficient ground. It is the probability of actual, concrete happenings. But neither is this pre-existent abstract realm of propensities devoid of primordial momentum, architecture and Clear Light* sentience. It is a "nothingness" in which the possibility of becoming is afloat & intelligent !

γ.2 The world-ground contains the infinity of all possible (potential, probable) abstract prefigurations of all possible future worlds. This is its primordial architecture, form or information (creativity). But it also encompasses all virtual energy states (primordial momentum, matter) and all possible choices for unity & harmony (primordial sentience, consciousness).

γ.3 The world-system is constituted by concrete actual occasions (the world) and by primordial formative abstracts (the world-ground). The world-system is all things actual & virtual (possible, likely, probable).

δ. Let us call "actual occasion" a single droplet, part of the many drops constituting a single event. Because this actually does occur, it is an actual occasion. Because this occurrence is worldly, it is concrete. Actual occasions are the basic elements of the togetherness of actual events and of actual, existing entities and they are individual & particular.

δ.1 These instances never happen "on their own", but are always actualized in concert with others, shaping novel togetherness (creative advance). They depend on determinations and conditions foreign to their own dynamic characteristics or principal ontological properties. The latter are not a fixed, substantial core, but a given form of movement, a particular style of kinetography.

δ.2 By virtue of its ontological properties (efficiency & finality), the fruit or effect of the kinetic style of a single actual occasion adds its own to the ongoing sea of process. As small changes may have huge effects, a tiny cluster of actual occasions can be enough to influence the whole movement. So all is dance, a display of energy from the base.

ε. The unit, principle or standard of a stream of events is therefore not the very short event-interval en, but its infinitesimal differential interval on.dt, the ultimate abstraction pointing to a single instance or isthmus of actuality. In terms of the ontological properties of an actual occasion, this singular, momentary droplet on has itself differential extension, i.e. is characterized by process on an infinitesimal scale.

ε.1 Even on this immeasurably small scale, properties emerge. These ontological properties, attributes or aspects of any actual occasion (the smallest possible unit of change) are themselves a process (interdependent), not a substance ; they do not constitute themselves but are constituted by others. These properties emerge as a result of the interplay between any two actual occasions.
The differential moment has architecture and choice in what would, without these, only be a barren transmission from one actual occasion to the next of the probabilities of momentum & position, a priori devoid of any creative advance. If this were the case, then the novelty happening in the world could not be properly explained.

ε.2 The jumps from virtual to actual (Big Bang), from the actual primordial soup to interstellar activity, from interstellar physics to biological systems, from biological organicity to sentience etc. evidence the evolutionary implications of the ontological properties of actual occasions, their ongoing creative advance. Starting with matter, the efficient determinations prevailed over the informational & sentient operators. When the basic order of the universe had been put in place, the further complexification of matter & information eventuated life, the possibility of negentropy, fertilization & instinct. Only at the far end of this evolutionary interval does sentience appear.

ζ. The extensive plenum of the continuum of an actual occasion can be : (a) spatial : as in the case of geometrical objects ; (b) temporal : as in the case of the duration of mental objects ; (c) spatio-temporal : as in the case of the endurance of sensate objects. All actual occasions have this extensiveness in common. The extension of actual occasions over each other is crucial to grasp the possibility of the novel togetherness of actual occasions unfolding creativity and shaping the creative advance of the world. This horizontal passage of events or passing of Nature brings in temporality.

For essentialism, the principle "operari sequitur esse" holds. This means every process is owned by some substance. Here one thinks substance first and then views change as accidental to it. Process thought inverts the principle : "esse sequitur operari" ; things are constituted out of the flow of process. So things are what they do. Change is thought first and things are momentary arisings, abidings, ceasings & reemergences of dynamical units. A process is an integrated series of connected developments coordinated by an open & creative program. It is not a mere collection of sequential presents or moments, but exhibits a structure allowing a construction made from materials of the past to be passed on to the future generation. This transition is not one-to-one, not merely efficient, for the internal make-up of its actual occasions shapes a new particular concretion, bears finality allowing for creative advance or novelty.

Heraclitus, thinking process first & foremost, avoids the fallacy of substantializing the world into perduring things like substances. Fundamentally, everything flows ("panta rhei") and although Plato disliked this principle ("like leaky pots" - Cratylus, 440c), he accepted it insofar as the "world of becoming" goes. Aristotle too saw the natural (sublunar) world exhibit a collective, chaotic dynamism. Change is fundamental, and the latter is the transit from mere possibility (potency) to the realization (act) of this potential, and this to the point of perfection ("entelecheia"). This makes Peripatetic thought pervasively processual. Of course, both Plato & Aristotle accepted the presence of substance, either as a fundamental transcendent reality or as inherently natural & biological (cf. hylemorphism). And both, although in a different way, accept the Greek prejudice for Olympic states (cf.
Plato's "world of ideas" and Aristotle's view on contemplative knowledge/life, the "active intellect", the "Unmoved Mover" and the "actus purus"). In modern times, the standard bearer of process metaphysics  was of course Gottfried Wilhelm Freiherr von Leibniz (1646 - 1716). The fundamental units of Nature are punctiform, non-extended, "spiritual" processes called "monads", filling space completely and thus constituting a "plenum". These monads or "incorporeal automata" are bundles of activity, endowed with an inner force (appetition), ongoingly destabilizing them and providing for a processual course of unending change. And it was in the writings of Leibniz that Whitehead, the dominant figure in recent process thought, found inspiration. Like Leibniz, he considered physical processes as of first importance and other sorts of processes as superengrafted upon them. The concept of an all-integrating physical field being pivotal (cf. the influence of Maxwell's field equations). But unlike Leibniz, the units of process are not substantial spiritual "monads", but psycho-physical "actual occasions". They are not closed, but highly "social" and "open". Actual occasions, the units of process, are Janus-faced : they take from the past and, on the basis of an inner, finative structure, transform states of affairs, paving the way for further processes. They are not merely product-productive, manufacturing things, but state-transformative. Although indivisible, actual occasions are not "little things", but a differential interval of change "dt" explained in terms of efficient & final determinations, the vectors of change. Actual occasions are not closed (not self-sufficient like substances), but fundamentally open to other occasions, by which they are entered and in which they enter. Thus their perpetual perishing is matched by their perpetual (re)emergence in the "concrescence" of new occasions. These occasions always touch their environments and this implies a low-grade mode of sentience (spontaneity, self-determination and purpose). They are thus living & interacting droplets of elemental experience. They are part of the organic organization of Nature as a whole, but constitute themselves an organism of sorts, with an infinitesimal constitution of their own. Nature is a manifold of diffused processes spread out, but forming an organic, integrated whole. As was the case in the ontology of Leibniz, macrocosm and microcosm are coordinated. Not because each actual occasion mirrors the whole, but because they reach out and touch other occasions, forming, by way of complexification, aggregates and finally individualized societies of actual occasions. § 2 Efficient & Final Determinations of an Actual Occasion. α. Actual occasion x is momentary (at that instance) and actual, i.e. logically & functionally present here and now. Abstracted as standing alone, the differential interval "dt" out of which x is constituted has an extensive continuum, albeit momentarily. x has outer and inner relations of extension, i.e. in respect to other (earlier or future) actual occasions and to itself. These definite ontological particularities of each actual occasion involve extrinsic & intrinsic ontological properties. β. The extrinsic ontological properties of an actual occasion are the temporal-efficient connections of actual occasion x with the one before (x-1) and with the upcoming one (x+1). They happen in time, take space and operate certain determinations & conditions related to the momentum (energy or matter) of x. 
So they are called "efficient", i.e. directly bringing change (enhancing process). This gives the ongoingness of process its stream-like or wave-like characteristics. It is the exteriority of an actual occasion, its horizontal vector. γ. Intrinsic are : (a) the information (architecture, code, form, software) available to the actual occasion regarding other actual occasions, its informational weight and acquired degree of formal integration of data (in abstract operators) and (b) the weighed choice (sentience) successfully advantaging a certain efficient outcome by manipulating its probability-fields. γ.1 Ultimately, this choice aims to actualize the greatest possible unity & harmony in and for the forms of (novel) togetherness involving all possible other actual occasions. But due to lack of information and/or bad choices, this is mostly limited to the immediate environment and merely local interests. γ.3 Both informational & sentient operations define the interiority of an actual occasion, namely what it (momentarily) gathers as for itself (as a momentary "self" or imputation of subjective identity). This refers to the boundaries of actual occasions and what happens within them, to their particle-like or droplet-like spatiality and geometry. This is the vertical vector of an actual occasion, defining order & choice. δ. The extrinsic (efficient) & intrinsic (final) ontological properties of the ontological principal, defining two modes of existence of an actual occasion, only exist as long as the moment endures. But they do define the flash-like impetus of this ephemeral moment to the next, as well as the possibility of x to influence x+1. In the organic totality of the world, an actual occasion is the smallest unity of process. Each momentary occasion extols a perpetual va-et-vient between two modes of existence (or ontological properties) : an objective mode, in which it only exists for others ("esse est percipi"), and a subjective mode of existence, in which the actual occasion is none but subjective experiential properties ("esse est percepere"). In the first, objective mode, a physical experience is at hand, explained in terms of the horizontal vector of the action of efficient causation. In the second, subjective mode, a mental reaction ensues, bringing about the vertical vector of final causation. Actual occasions, contrary to Leibnizian monads, do communicate with other actual occasions. In terms of a logical order, an actual occasion "begins" with an open window to the past, showing previous actual occasions x-1, i.e. the efficient determination of the past world on it. Next, it responds to this past actuality physically. Simultaneously, it cross-wise puts into place its own current inner & dynamic ideality, drawing out possibilities of what was received and weighing the options in order to favour a single outcome by way of choice. By doing so, each actual occasion exercises final determination, showing differential self-determination, spontaneity & self-determination. The difference between efficient and final determination is analogous to the difference between actual and potential in quantum mechanics, brought about by the "collapse of the wave-function" (Bohr, Heisenberg, von Neumann, Schrödinger), turning an infinite number of potential possibilities (given by the vertical vector) into a single actual one (a singular horizontal vector). Choice ends the order of subjectivity, but the actual occasion does not perish. 
The end of its subjective experience is the beginning of its existence as efficient determinant on subsequent actual occasions, being the physical past entering their event-horizon, and reemerging there. Actual occasions are therefore never in "one place" or "solitary", but a fortiori enter into each other's process (togetherness or concrescence) and so define continua of occasion-streams. They are interconnected momentary events, not isolated (Olympic) enduring substances. Because of this inner, non-physical mode of existence, each occasion has a degree of consciousness (self-determination, spontaneity & novelty). This is not the same as saying occasions have an "inner life" in the way humans experience this. The subjective mode of actual occasions rules a weighing procedure effectuating a decision. And as the outcome of each actual occasion is richer than what physically, by way of efficient causation alone, would have entered its window of past actualities, novelty is possible. Because of this, creative advance ensues.

§ 3 The Three Operators.

α. The two modes of an actual occasion (objective & subjective or efficient & final) encompass its three known aspects : matter, information & consciousness. These appear as integrated explanations of the functioning of the organic totality known as "Nature", "world" or "the concrete". They refer to specific descriptions (of theories and data) of irreducible but interdependent facets of each actual occasion.

β. Efficient lawfulness and the objective mode of each actual occasion (the horizontal vector) call for the physical aspect of matter, while final determination and the subjective mode (the vertical vector) call for the aspect of abstract validation (information) and a degree of participatory self-determination (consciousness).

β.1 These define ontological boundaries, allowing for a better understanding of the ongoing process of what is actually happening. These are not principles, or worse, substances, but merely aspects explaining physical objects, informational content, its value, and states & contents of consciousness.

β.2 Each actual occasion has three distinct operational domains, encompassing the physical (matter) and the non-physical (information & consciousness) modes of occasions. These domains explain the operation of three functionally different societies of actual occasions, namely matter, information and consciousness :

• matter : hardware ; sub-atomic, atomic, molecular, cellular, physiological societies of actual occasions, encompassing particles, waves, fields & forces, or the domain of the physical - the Real Numbers system ;
• information : software ; embodied or disembodied notions, ideas, languages, logics, theories about actual occasions ; this is then the domain of the informational - the Natural Numbers system ;
• consciousness : userware ; the self-determination, spontaneity, novelty & participatory sentient grasping of actual occasions, or the domain of the conscious - the Complex Numbers system.

β.3 The domain of the physical is not exclusively material. Indeed, the actual occasions constituting it do possess (on the most fundamental ontological level) information & sentience (but in a lesser degree). Likewise for the domain of the informational and the domain of the conscious.

γ. General process ontology posits bi-modal actual occasions with their three functional domains as the ground of all possible phenomena, existing things, objects, entities or items.
Each actual occasion has a physical (efficient, objective) and a mental (finative, subjective) mode ; its horizontal & vertical vectors respectively. The arising of actual occasions is caused by previous actual occasions, and this entry of past actual occasions in what happens hic et nunc is by way of efficient causation. The abiding of each actual occasion is its internal structure, causing choice, decision or self-determination. Whenever a choice is made, the actual occasion ceases, but this perishing brings about an efficient influence on the next actual occasion, and this influence has integrated the work of final determination by way of sentient manipulation of properties.

δ. The three operational domains at work in every single actual occasion also operate on every scale of togetherness of actual occasions. Hence, it also applies to the world as a whole, and even extends to the world-ground, albeit in a primordial, virtual sense. In the case of the world-ground however, not being an actual occasion, these three do not refer to operational determinations & conditions but merely to the probability or virtual possibility of the latter. They are pre-existent probabilities for the rise of matter, information & consciousness and their creative concert.

ε. The primordial conditions of the material aspect of the world explain how quantum events pop in and out of existence. They point to the primordial quantum plasma, i.e. a nothingness "potentized" to actualize and become some material thing. The primordial conditions of the informational aspect of the world are an infinite number of possible forms, architectures, codes or organizations likely to actualize when the proper material conditions prevail.

ε.1 The primordial conditions of the sentient aspect of the world are the infinite consciousness of God* prehending all past and all current conditions and determinations of all actual occasions conjunctively and capable of (re)weighing the probabilities of material & informational objects.

ε.2 This absolute consciousness also extends into the world, and so is the sole actual occasion continuously bridging the world-ground and the world. Insofar as this is merely the potentiality of the highest possible unity & harmony, it is primordial. Insofar as this is actual, it is moving along with every possible actual occasion and so manifest. God* is the ultimate exception.

Specific process ontology applies this scheme of general process ontology to non-individualized compounds, aggregates or societies of actual occasions and to individualized societies of actual occasions. Let us see how this works in a neurophilosophy of process. There, in the two individualized societies of actual occasions at hand, namely the brain and the mind (cf. A Philosophy of the Mind and Its Brain, 2009), three irreducible domains or operators are constantly at work.
These are derived from cybernetics, information-theory and artificial intelligence (a toy computational sketch follows below) :

• hardware or matter : the mature, healthy, triune human brain is able, as a physical object ruled by efficient determination, to process, compute and execute complex algorithms and integrate all kinds of neuronal activity - the developed, individualized mind is able to be open to the efficient determinations resulting from previous moments of brain functioning ;
• software or information : the innate and acquired software (wiring) of the brain, its memory & processing speed - the individualized mind is an expert-system containing codes or knowledge to choose from when solving problems ;
• userware or consciousness : the mature brain works according to its own final determination, making choices to guarantee its organic functioning as a manifold and effect necessary changes in its environment - individualized consciousness or mind instantiates unified states of consciousness (moment to moment intentional awareness) as a percipient participator interacting meaningfully with its brain and the physical world.

§ 4 Aggregates of Actual Occasions.

α. Entities and their elements, events, are actual occasions interrelated in a determining way in one extensive continuum. A single actual occasion is a limiting type of an event (an entity, actuality or object) with only one member. The world is thus built up of these actual occasions. Events are aggregates of actual occasions. Entities are aggregates of events. Because they cannot be divided, are not found standing alone, but can only be conceptually analyzed when abstracted (on the basis of their extensive continuum), actual occasions are called "atomic".

β. The organic togetherness of actual occasions has various ontological levels, shaping an ontological ladder ranging from actual occasions, events, entities, to insentient compounds and individualized societies with varying degrees of freedom. Not only is matter complex (cf. hylic pluralism), also information & consciousness are layered.

β.1 Mere aggregates or compounds of actual occasions are not sentient. So traditional panpsychism, stating all possible things have a subjective mode, is not the case. Although the individuals part of an aggregate, namely the actual occasions themselves, do experience an infinitesimally small degree of self-unity, the aggregate itself does not. In terms of aggregation, rocks, rain, rivers, oceans, streets, cities, provinces, countries, continents, planets, artefacts, etc. are insentient.

β.2 Lacking any self-conscious finality, unable to name themselves in a self-reflective cognitive act, aggregates are ruled by efficient law only. Actual occasions, mental or physical, come together to form events and events come together to form entities or existing objects. Mental objects are actual occasions mainly processing their inner, subjective, vertical vector, but they do have a minimal efficient determination, namely the "stream" of moments of consciousness. Physical objects are actual occasions mainly acting out their outer, objective, horizontal vector, but they maintain a minimal final determination, namely in the architecture of their particles, fields & forces as well as in their receptivity to the direction given by the conserving cause of the world (the immanent aspect of God*), in particular their "loyalty" to the natural constants necessary to maintain the intelligent design (of themselves & the world) intended by the "Anima Mundi", the perfection ("entelecheia") of the world.
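As announced above, the hardware/software/userware triad and the two vectors of an actual occasion invite a toy computational sketch. The following Python fragment is purely illustrative : all names, the rule set and the weighting scheme are invented here for the occasion and are in no way the author's formalism. It merely mimics how an occasion receives the past efficiently, draws out options informationally, settles one by weighed (sentient) choice, and perishes into the next occasion.

import random

# Toy model only (hypothetical names, not the author's formalism) :
# an "actual occasion" receives the settled past efficiently (horizontal
# vector, "hardware"), draws out options by its code (vertical vector,
# "software") and settles one by weighed choice ("userware"), then
# perishes, entering the next occasion as its efficient past.

class ActualOccasion:
    def __init__(self, past, code):
        self.past = past   # efficient determination : the settled past
        self.code = code   # informational operator : rules shaping options

    def options(self):
        # information draws out possibilities from what was received
        return [rule(self.past) for rule in self.code]

    def choose(self, aim):
        # sentient operator : a weighed choice favouring one outcome,
        # scored by a hypothetical "unity & harmony" function `aim`
        candidates = self.options()
        weights = [aim(c) for c in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

def stream(start, code, aim, steps):
    # a stream of occasions : each settled choice enters the next efficiently
    state = start
    for _ in range(steps):
        state = ActualOccasion(state, code).choose(aim)
    return state

# hypothetical example : numeric "states", three rules, an aim favouring balance
rules = [lambda x: x + 1, lambda x: x - 1, lambda x: x]
aim = lambda c: 1.0 / (1.0 + abs(c))   # "harmony" : staying near equilibrium
print(stream(0, rules, aim, 100))

Run repeatedly, such a stream is neither mechanical (efficient determination alone would merely copy the past forward) nor random (the weights bias each outcome toward the aim) ; this is the toy analogue of final determination producing novelty, nothing more.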
Non-individualized aggregates of actual occasions are unable to be aware of the totality of which they are a part. A rock does not know it is a rock. Individuality implies a view on totality and its unity, like the bird knowing it is part of a flock. § 5 Individualized Societies. α. In individualized societies of actual occasions, interdependence and complex relationality engender negentropic dissipative systems. The most intricate of these is able to give a high-order degree of finality to the impulses of past efficient processes. Here human conscious life enters the picture, with each human being experiencing him or herself as a unity. But there are kingdoms lower than humanity that strikingly demonstrate their individuality, namely minerals, plants & animals. Are there kingdoms higher than humanity? β. The crystalline architecture of minerals constitutes an intelligent factor, revealing a mathematical order at work behind what are merely interacting waves, particles, fields & forces. The photosynthesis of plants, their ability to multiply and specifically adapt to their immediate environments defines a higher degree of liberty and allows for their individualization. The behaviour of animals is already very advanced, and telling of their differentiation as groups and in certain cases as specific context-bound individuals within a group. Finally, the sentient behaviour of humans, able to produce abstract cultural objects and transmit them, testifies to a very high degree of freedom. At every rung of this ontological ladder, we see the three ontological domains becoming more complex. With the emergence of sentience, individualization gives rise to naming, labelling and conceptualization. But this cannot happen without complex code and very sophisticated efficient determination. γ. The domain of consciousness may be organized in degrees of freedom, beginning with a singular actual occasion and ending with all individualized societies of occasions. γ.1 Subatomic particles, particles, molecules, tissues, natural kingdoms (minerals, plants, animals, humans) all possess a degree of consciousness. While sentient, they do not entertain an inner conscious life comparable to that of humans on this planet. γ.2 Such an intimate development of consciousness calls for a high-order complexification of mental actual occasions, one producing the complex, non-linear subdomain of human inner life. As on this planet this distinct type of sentient life is rare, all human life is by nature precious. γ.3 All other complex individualized societies of occasions do experience themselves as a unity run by a hierarchy, and so fall within the field of panexperientialism. Both aggregates and individualized societies are merely togethernesses of actual occasions, ongoingly oscillating between objective (efficiency) & subjective (aim), and described in terms of their material, informational and conscious properties. In aggregates, formed by the natural togetherness of actual occasions, actual occasions form events & objects barren of the experience of unity. Every actual occasion happening in such a compound remains interlocked with all co-relative occasions, and this without a single dominant actual occasion or set of dominant actual occasions "leading the way". Because ontic hierarchy is absent, aggregates are not sentient, while their constituting occasions are (at their level). Nothing precludes the presence of more complex levels of consciousness, nor of other means to embody consciousness (cf.
subtle, yet unknown, non-physical bodies, like the subtle "sheets" of the Indian yoga tradition). Hence, process ontology has no a priori regarding togetherness, interrelatedness & concrescence. Of course, the question remains whether speculations about non-physical life can be argued with a comfortable measure of validity. On Earth, the highest level is the dominant actual occasion of experience constituting the human mind. As even actual occasions, with at least an iota of self-determination, provide the lowest-level example of the emergence of a higher-level actuality, we may understand, in comparison, brain cells as highly complex centres of experiential creativity. § 6 Panpsychism versus Panexperientialism. α. While individual occasions, which are not substantial or thing-like but the common unit of process, possess, besides a physical, objective mode (efficient determination), also a mental, subjective, experiential mode (final determination), non-individualized aggregates or compounds of actual occasions do not manifest such a mental mode and are therefore insentient. They therefore mostly operate by efficient determination and are physical, constituted by matter, analyzed in terms of particles, waves, fields, the four forces and the superforce, the infinite vacuum energy of the primordial quantum plasma (primordial matter). This infinite, undifferentiated energy is not an actual occasion. It is not concrete, cannot be abstracted, but is an abstract probability not without paradox. β. The (massive) presence of insentient objects rules out panpsychism, i.e. the claim that all things live. This claim is not made. All things experience something, and this in a non-individualized way (as aggregates) or in an individualized way (as societies). Moreover, the mental, subjective mode of a single actual occasion has the lowest possible degree of freedom. As all objects are composed of actual occasions, all objects, at the deepest ontological level, possess differential sentience. This is panexperientialism. γ. The infinitesimal sentience of all possible actual occasions should not be compared with the activity of societies of actual occasions like the high-order conscious experience of human beings. Some societies of actual occasions are indeed individualized, i.e. share a self-image with an imago. Only when an actual occasion, by entering into other actual occasions (adding its concretion or internal make-up to theirs), helps bring actual occasions together, can the creativity of the sea of process eventually give rise to these individualized societies of actual occasions consciously experiencing their own unity, and this at various levels of freedom & harmony (as in minerals, plants, animals, humans and metaphysical entities). γ.1 On this ontological ladder, the process of evolution and its natural creative selection is at work, producing more complex organizations of actual occasions interpenetrating each other. Because so many non-individualized aggregates can be identified, it is not the case that all things are sentient. γ.2 Lots of objects, while composed of infinitesimally sentient actual occasions, are totally devoid of any sense of sharing a "self", an awareness of possessing a common imago. Ergo, panpsychism is not the case. Not all things are sentient, nor are all things alive. δ.
The organic togetherness of all possible actual occasions has various ontological levels, ranging from actual occasions, events & entities (or insentient compounds) to individualized societies with varying degrees of freedom. δ.1 The highest level of freedom is the dominant actual occasion of what happens. On Earth, this is the human mind. δ.2 Actual occasions, with their infinitesimal iota of self-determination, are the lowest-level examples of the emergence of a higher-level actuality. This is because of their creative input, which results from making the decision characterizing their mental, finative mode part of the efficient determination, entering other actual occasions and appropriating data from their vicinity. ε. In terms of efficient determination, the mind emerged from the brain. But in terms of final determination, the possibilities offered by the brain are "weighed" and then chosen by the mind (emerged from the brain). Moreover, the emergent property (the mind as an actual entity in its own right) is able to exert a determinate influence of its own (both final & efficient). Mental causation is not an epiphenomenon, for besides the upward causation from the body to the mind, there is the self-determination by the mind, and on the basis of this, downward causation from the mind to the body. This is possible because mind and body are not two different kinds of things, but both highly complex individualized societies of actual occasions, linked in a functional and interactionist way. ζ. For panexperientialism, "physical entities" are always physico-mental (or, what comes down to the same, psycho-physical). Focusing on efficient determination, and the emergence of an independent mental out of the physical, actual occasions are physico-mental. But insofar as final determination is concerned, and because of the downward causation effectuated by high-order minds on subtle physical processes, actual occasions are psycho-physical. Both are complementary. In the world, three major sets of specialized actual occasions are at work: matter, information & consciousness. These three give rise to the physical domain, the informational domain and the sentient domain respectively. These three constitute what actually happens in the world. Ontogenetically, the physical domain manifested first (with the Big Bang). Out of the unique singularity of this actual occasion (and its mental mode of finality) arose the expert-systems, the problem-solving architectures of the world aiming to bring about evolution-in-unity (complexifying homogeneity) in the ongoing physical processes. The interaction of matter and information gives ground for sentience to exert its ability to be aware of the momentum & architecture of objects possessed, grasped or apprehended by the knower, and this in terms of the harmony of the unity between the known & the knower. These three ontological emergences are "outpourings" of specialized operational domains. The world-ground expresses the mere probability of the actual emergence of these ontological domains of the world. The world is sentient. Every actual occasion is sentient. But between this lowest sentient rung of the ontological ladder and the highest (the totality of all actual occasions prehended by a single immanent & totalizing absolute consciousness), many levels of insentient objects share in the togetherness of all actual occasions constituting the ongoing sea of process. This is why panpsychism is not at hand.
Nor is the "nature morte"-view of the world as a set of "disjecta membra" retained. Both the physical mode (matter) as the mental mode (information, consciousness) of all possible phenomena are important. § 7 The God* of Process Ontology. α. God* is not the ultimate substance and final, absolute self-sufficient ground and self-settled self-subsisting essence ("esse subsistens") of all possible things. God* does not essentially (substantially) differ from the world. Although unique, God* is not the One Alone, the "idea" transcending all others, the "totaliter aliter" or "total other", the absolute absoluteness ontologically forever isolated from the world. God* is not absence of togetherness. He is not hidden ("Deus absconditus"). Under analysis, this "God" of reifying theology, this Creator cannot be found. One may conclude such a "God" does not exist. But God* exists, both primordial and immanent. β. God* is the unique non-temporal & non-spatial abstract actual entity giving relevance to the realm of pure possibility (primordial matter and primordial information) in the becoming of the actual world, encompassing both non-temporal everlastingness (as part of the formative elements) as temporal (recurrent) eternity (as ultimate actual entity operating in the world). Here we have a unique (paradoxical) abstract actuality, performing an unexcelled holomovement of holomovements, a unique solo, the Dance of dances. β.1 How can something acting on such a transfinite scale keep the world-ground exclusively "potential" ? Being part of the virtual world-ground, absolute sentience is defined as an actual occasion ! Is God* the unique, all-encompassing exception ? If so, how to maintain God* does not influence the world in terms of efficient determination, i.e. physically ? The spirit of criticism shuns the return of Caesarean Divinity, a God forcing its beings to kneel, bow and grovel at its feet. β.2 Does this mean God* poses a paradox ? Is Divine process para-consistent, implying the logic involving this unique actual occasion is not formal (or Aristotelian), with its linearity, but non-linear or able to efficiently organize certain inconsistencies in the fabric of conceptual reason itself ? Like quantum logic, not avoiding contradictions, but handling them in some way. β.3 Is this God* the object of nondual (non-conceptual) cognition only ? Lacking a mathematically perfect logic is however not absence of logic or no logic at all. Process theology is a branch of transcendent metaphysics and therefore impossible to validate by empirico-formal fact or by conclusive (i.e. absolute) argumentative justification of whatever sort. Its rules are a hermeneutics of mystical poetry, as indicated by "*" in "God*" or "Clear Light*". Lack of conclusive argument is however not absence of terministic argument. γ. God*, both potential & actual, both abstract & present, is the meeting ground of the actual world with the realm of the pure possibilities, one encompassing primordial matter and primordial information. This makes God* stand out in the world-ground. Not in the sense of any Divine Creativity, but by the possibility of infinite reorganization and an absolute consciousness (of which cosmic consciousness is but an instance linked with a given world). God*'s choice for unity & harmony has direct bearing on what happens in the world, albeit not by direct efficient determination, as omnipotence would have it. γ.1 Suppose omnipotence would be the case. 
The world-ground would then not be a mere abstract of possibilities (the possibility of the next actual occasion of the world), but the throne of an omnipotent God* able to hinder freedom, the creative outcome of the organisations of primordial information. Given freedom, and so novelty & creative advance, this cannot be the case. God* prehends all possibilities of energy & order, and merely gives relevance to these in the becoming of the world, but only acts by way of final determination, influencing (in terms of the domain of matter) physical outcome only indirectly, by luring the propensity-fields of momentum, not by the spectacular, miraculous or supernatural way of a "Deus ex machina". One may argue God* has an indirect bearing on the world, but then merely as a Grand Architect forced to consider the material with which the Magnum Opus is done. The world-system may be tabulated as follows: • world: actuality; temporal & actual; concrete actual. • world-ground: potentiality; non-temporal & primordial; sentience as abstract actual, primordial information, primordial matter. γ.2 God* is the anterior ground guaranteeing that a very small fraction of all possibilities may enter into the actual becoming of the spatiotemporal world. Without God*, nothing of what is possible in terms of the world-ground would become some thing, change and create in the world. The order and creativity of what happens in the world are the result of a certain valuation of possibilities. However, God* is not the world. Nor is God* the realm of pure possibilities. The "Lord of Possibilities" is not primordial matter, nor creative order. γ.3 Actual entities are concrete, while God* is an abstract actual entity. Creativity & the primordial quantum plasma are non-actual formative elements, and therefore "pure possibilities". God*, creativity and the quantum plasma are the formative abstracts of the world. God* plays with loaded dice. δ. Consider God* as having two natures, called "primordial" and "immanent". δ.1 Primordially, God* is the instance grounding the permanence and continuous novelty characterizing the world. This does not call for substance, but for an infinitely perfect & ongoing symmetry-transformation valuating pure possibility. Allowing metaphysics to conceptualize such a special actual occasion is opening up conceptual cognition to the standards of transfinite calculus and integrating the para-consistent treatment of paradox. δ.2 The primordial nature of God* has no direct impact on the physical stream of efficient determinations of the world. For although an actual entity, God*'s activity is "abstract", namely in the aesthetic (artistic) process of valuating the available pure possibilities of the creative order and the infinite sea of energy. Although engaged in the factual becoming of the actual entities, God* cannot be conceived as a concrete actual entity, a fact among the facts possessing direct efficient (physical) determination. Ergo, God* cannot be omnipotent. God* is the sole "abstract" actual entity! Nevertheless, besides being abstract, God* is also a Divine consciousness prehending all actualities here & now. This is the immanent nature of the Divine. ε. God*'s primordial nature is transcendent, untouched by the actual world. This aspect is the "Lord of All Possibilities". It offers all phenomena the possibility to constitute themselves. If not, nothing would happen. By way of prehensive valuation, God* brings harmony to all possibilities, for actuality implies choice & limitation.
But as all order is contingent, lots of things always remain possible. The "ideal harmony" is only realized virtually, as an abstract, and God* is the actual entity bringing this beauty into actuality, turning potential harmony into actual aesthetic value. In this way, God* directs matter indirectly. While not omnipotent, God* remains super-powerful. ε.1 For the order of freedom and responsibility to abide, omnipotence is logically impossible. Suppose God* were omnipotent: then why not prevent the Holocaust? Due to so many powerful & concentrated evil Nazi intentions, God* could not immediately stop this bad architecture from unfolding. The Divine is a Grand Architect, not the Creator of all things. Call this the Auschwitz-paradox: although an extremely powerful "Lord of Beauty", God*, confronting sentient beings exerting their "demonic" creativity, cannot prevent this extreme falsehood, ugliness & evil from temporarily abiding. Creativity itself is merely the material with which God* works, and cannot be manipulated "ex nihilo" or "ex cathedra". Likewise, the unacceptable and extremely unfortunate destruction of the innocent is the price paid for the freedom of destructive intent (consciousness) and disruptive togetherness (information & matter). ε.2 Evil, both natural (based on material & informational collisions) and moral (based on bad intent), is the outcome of annihilating togetherness, bringing out egology. The presence of friction & entropy does not preclude God* from balancing out these unwanted effects in the future. Although at times evil is overpowering, in the end harmony always prevails. This is the Gandhi-principle. ζ. God* does not decide, but lures, i.e. makes beauty more likely. There is no direct efficient determination at work here, but a teleological pull inviting creative advance. Given the circumstances, a tender pressure is present to achieve the highest possible harmony. ζ.1 God* is the necessary condition, but not the sufficient condition for events. Classical omnipotence & omniscience are thus eliminated. God* knows all actual events as actual and all possible (future) events as possible. He does not know all future events as actual. This would be a category mistake. ζ.2 God* cannot hamper creativity, nor curtail energy. ∫ Falsehood, ugliness & evil are the outcome of the clash of freedom, of the presence of creativity. They are as sad as they are inevitable. η. Given all the conditions determining things, the Divine purpose for each and every thing, and this on every rung of the ontological ladder, is to just be a contributor to the realization of the purpose of the whole, the unity of harmony in diversity. God* is the unique abstract actual entity making it possible for the multiplicity of events to end up in harmony, togetherness and unity. This aspect of God* is permanent (an ongoing holomovement or symmetry-transformation) & eternal (beginningless and nowhere). This holomovement never ends. ∫ God* is the Âdi-Buddha! The immanent nature of the Divine is God*'s concrete, omnipresent consciousness, actual near all worldly possibilities, actively valorising them to bring out harmony and the purpose of the whole, as well as conserving them as a totality, as a world, society, aggregate, event or actual occasion. θ.1 God*, with infinite care, is a tenderness losing nothing. Hence, the Divine experience of the world changes. It always grows and can never be given as a whole.
In this sense God* is always learning to untie the new knots, to defuse unique conflicts of interest. θ.2 God* is loyal and will not forsake a single actual occasion. Infinitely intelligent and prehending all-comprehensively, God*'s experience grows and is so part of history. God* is not self-powered and not omnipotent. God* is not an impassible super-object, not a super-substance, nor a "Caesar" disconnected from and looking down on the world, but, on the contrary, changed and touched by what happens insofar as the immanent nature goes. Can process theology merely be another way to analyze the three Bodies of the Âdi-Buddha, the primordial Buddha representing the class of all Buddhas or awakened actual occasions "thus gone" (into holomovement)? Are the differences between this Âdi-Buddha and the abstract concept of the "God* of process" not merely terminological & cultural? The Truth Body of the Âdi-Buddha, the "dharmakâya", is a formless, undifferentiated, empty, nondual luminous field of creativity, out of which all possibilities arise. With a thoroughly purified conceptual mind entering the non-conceptual, such metaphysical poetry is not merely nonsensical, but the condensation of actual direct, nondual cognition. In itself, this Truth Body is unmoved and has no motivational factors to allow the Form Bodies to arise. The latter are "spontaneous" emergences. Likewise, creativity and God* are not causally related. God* does not create it, nor is creativity defined by what God* wants. Since beginningless time, the Truth Body is given, just as are unlimited creativity (primordial information) and the infinite (zero-point) plasma (primordial matter). The Form Body ("rûpakâya") is an ideal form emerging out of the Truth Body for the sake of compassionate activity. In process theology, compassion is subsumed under beauty, for how can ugliness and disorder be compassionate? God* makes certain definite forms possible by valuating the endless field of creativity using the key of unity & beauty. The Form Bodies are the two ways the Âdi-Buddha relates to ordinary, apparent events ("samsâra"): the Enjoyment Body is the ideal "form" with which the endless possibilities are given definiteness (God* as primordial), while the Emanation Body is the actual ideal "event" bringing this form down to the plane of physicality and concrete "luring" Divine consciousness (God* as immanent, manipulating propensities). The two natures of God* are not two ontological parts or elements, but two ways of dealing with the world. Primordially, God* is always offering possibilities and realizing unity, order & harmony. Consequentially, in these immanent ways, God* takes the self-creation of all actual events in this concrete world into account, considering what is realized of what is made possible. In these two ways, initiating & responding, permanent & alternating, we observe the bi-polar mode of God*, favouring a process-based, pan-en-theist approach of the actual world and its ground. Chapter 2. Mental Pliancy & its Enemies. Having established the general contours of this critical metaphysics of process, the quest for the most general, shared feature of the world and its sufficient ground may be prepared. What kind of mind is best able to do so? A certain style and a transcendental logic embedded in a critical study of truth, goodness and beauty definitely capacitate the conceptual mind by limiting it and thereby purifying it; as it were, preparing it for a speculation on process.
Indeed, in the context of metaphysics, one of the most fundamental mental operators is the constant remembrance of the impermanent nature of all phenomena, essentially devoid of self-settled substance; a constant return to process, interdependence and relations, in other words to what is at hand hic et nunc. But be not mistaken! This necessary preparation, offering a general overview or panorama, is only like clearing the ground, not yet the actual deed of planting the seed by nondual prehension. Therefore, to inspire the purified speculative mind, the latter must be made pliant. This is more than just being able to conceptually understand, but touches actionality, affectivity as well as all subtle and very subtle states of consciousness, like the direct experience of nondual states of mind. Without this pliancy, the mind is not open enough to attend to totalized objects and so generates a barren view. Optimized, mental pliancy encompasses all modes of cognition. Mental pliancy is the property of a mind attending its objects exclusively as relations and no longer as relata. Then, the manifold of objects is treated with suppleness & subtleness. In the actual state of presence with what is happening right now, objects are never treated as ontologically isolated from other objects. Nonlocality is part of the hallucination, of the illusion (appearing before us). When this pliancy becomes ultimate, then the non-substantial, non-conceptual resting-place underlying conceptual logic & validity, at best attending truth, goodness & beauty, is at hand. This is a spacious, non-conceptual reality encompassing all phenomena alike. Such enlightened mental pliancy is the ultimate manifestation of the dual-union of, on the one hand, process, and, on the other hand, lack of self-sufficient "substantia". Ultimate mental suppleness brings out the best of the mind: openness, depth, sharpness, acuteness, clarity, peace, power & wisdom. Speculative activity, being conceptual, cannot penetrate the nondual. Hence, for immanent metaphysics, only conventional mental pliancy pertains. To sufficiently inspire the conceptual mind so it constantly totalizes and grasps its objects with the highest possible degree of interdependence or relatedness, the speculative mind requires the highest possible degree of conventional mental pliancy. This generates the compassionate mind, actively engaged in actually ending the suffering of all other minds. Such a compassionate mind is needed to be able to produce or generate a valid immanent metaphysics. To explain the reasons why this is necessarily the case is one of the main goals of this chapter. Before achieving this, it must be clear what precisely the mind is all about. Three images assist in this: the stream, the mirror and the rainbow arching in space. Understanding these helps to establish a stricter definition of mind as mere awareness & cognizing. • As a stream, the mind never stays the same, but neither is it without form or merely random. Indeed, what stays identical is not some solid feature establishing itself, but the architecture of change or kinetography of the mind. Different minds therefore have different kinetographies. Always moving, the mind is a dynamical phenomenon, not a static structure or architecture. Change due to constant momentum is the main characteristic of the stream. Such change, relating to all possible features of the mind, points to the mind being without any self-settled element or property.
The mind is therefore empty of its own nature but other-powered, i.e. dependent on determinations & conditions of extra-mental objects. To the conceptual mind, succeeding moments of the stream constantly seem to flow in a temporal arrow from past via present to the future. This is the Arrow of Time. Such a mind attends itself in a special way, namely by positing a constant focus or point of reference & identity, called "I", "ego" or "self". The empirical ego is invented by the conceptual mind to position a certain contraction of awareness to a single moment of the stream. Awareness, in principle extended to the whole stream, is reduced to what happens on a small raft travelling on the stream ... From the vantage point of the ego on this simple flat boat, a temporal arrow pertains and the difference between mental and extra-mental is established on the basis of this seemingly fixed reference. However, if the limitations on attention imposed by the raft are left behind, and attention plunges into the stream to dive to its depths, it will eventually hit the original, very subtle layer of the mind. This is the underlying non-conceptual level, one encompassing the stream as a whole, a completeness devoid of any fixed or self-settled object. The mindstream or mental continuum is shared by all sentient beings possessing a mind. The image of the stream accommodates the view on all possible minds, except the nondual one. • As a mirror, the mind is empty of itself but merely reflects objects different from itself. Empty of itself, like the surface of the mirror, the mind is without memory, merely actual reflectivity. Without luminosity, reflections cannot appear on the surface of a mirror. The root of the mind, the very subtle mind, is Clear Light*. Moreover, indifferent to what kind of object appears, a Buddha or a pig, a mirror merely reflects without interpretation, i.e. without judging its attended objects. Interpretation is the work of a certain kind of mind, a concave or convex mind refusing to return to the Euclidean plane of the original, fully functional uncurved mirror-surface. This is the conceptual mind, distinguishing between objects turned inward (subjectivity) or outward (objectivity), and thereby establishing its special characteristic: afflicted duality, or a state of mind causing emotional afflictions and mental obscurations. The Factum Rationis and concordia discors brought to bear earlier are but special instances of this overall afflictive duality of the conceptual mind. This allows for the distinction between pre-conceptual, conceptual and non-conceptual minds. The first leads to innate self-grasping, the second to self-cherishing and acquired self-grasping. The image of the mirror accommodates the view on the nondual mind and none other. • As a rainbow, the mind results from complex determinations & conditions. Like the tiny water droplets reflecting in the sunlight, it takes on the colour of the glass in which it is poured. We observe a specific hue and forget this is merely a refraction or curvature of white light. A given frequency is always the absence of all other frequencies. Like a pure transparent crystal or diamond, the mind reflects what it attends. The conceptual mind does this in terms of its specific colours, the non-conceptual mind in tune with the brilliant whiteness of the Clear Light*. The rainbow seems solid and real, but in truth it is merely a spacious phenomenon.
As the rainbow seems to connect Earth with heaven, the mind is the only bridge available to cross the chasm separating the conventional from the ultimate. Because of the mind, the end of suffering or salvation from all afflictions and obscurations is possible. Without this true peace, the play of seemingly endless suffering endures. Just as a rainbow is a set of colours, so the mind is a set of possibilities. Just as the rainbow disappears all of a sudden, so states of mind constantly change, as it were leaving no trace. The image of the rainbow accommodates the view on all minds, i.e. the simultaneity and unity between all conventional and all ultimate minds, the Buddha mind. The enemies of mental pliancy are ignorance and afflictive emotions. The former betrays a lack of insight into the true nature of phenomena, the latter manifests as the fire of the existential dialectic between exaggerated attachment or afflictive desire and revulsion or hatred. Ignorance superimposes a false idea, promotes a false ideation, designates a wrong view. This impacts all possible cognitive acts. Afflictive desire & hatred denote affective activities acting as root-causes for all subsequent major afflictions of the emotional mind: cruelty, greed, stupidity, passion, jealousy & pride. This directly affects intersubjectivity and therefore our degree of civilization. Studying these emotional states, one discovers their pivot is the notion of an enduring phenomenon. Human beings acquire this habit as the result of attributing a concept or a name to anything observed. Animals have a non-conceptual innate sense of self. This instinct is however not intuition born out of the purification of the conceptual
• Chemistry & Biochemistry • Courses 1010C Essentials of General Chemistry 4 credits Introduces students to the essential theories and principles of general chemistry and their application to modern society. Topics include chemical reactions, atomic and molecular structure, stoichiometry, bonding, the periodic table, acid-base theory, equilibrium, properties of gases, liquids and solids, and kinetics. The lecture course emphasizes problem-solving techniques while the laboratory portion introduces students to the methods of scientific investigation and basic laboratory techniques. (lecture: 3 hours; lab: 2 hours) Laboratory fee. 1045C, 1046C General Chemistry 4 credits Lecture and laboratory course for students going into the biological, chemical, health, or physical sciences. Atomic structure and stoichiometry; properties of gases, liquids, and solids; thermochemistry; quantum theory; electronic structures of atoms and molecules; chemical bonding; properties of solutions; thermodynamics; chemical equilibria including acid¬base and solubility; kinetics; electrochemistry; nuclear chemistry. Laboratory experiments enhance understanding of principles taught in lectures. Emphasis on quantitative techniques; computer interfacing and spreadsheet applications. Second semester includes semimicro qualitative analysis. (lecture: 3 hours; recitation: 1 hour; lab: 3 hours) Laboratory fee. 1125C Analytical Chemistry 4 credits Theory and practice of classical and modern analytical chemistry. Laboratory applications of volumetric, gravimetric, and instrumental methods including potentiometry, spectrophotometry, and chromatography. One laboratory hour is a conference hour. (lecture: 2 hours; lab: 5 hours) Laboratory fee. Prerequisite: CHEM 1046C. 1213C Organic Chemistry I 5 credits .The structure, properties, synthesis and reactions of hydrocarbons and alkyl halides, reaction mechanisms, stereochemistry. A brief discussion of carboxylic acids, their derivatives, carbohydrates and amino acids. Laboratory experiments are designed to illustrate methods of separation, purification, identification, and synthesis of organic compounds. Spectroscopic measurements and molecular modeling are included.  (lecture: 3 hours; recitation: 1 hour; lab: 4 hours) Laboratory fee. Prerequisite: CHEM 1046C. 1214R Organic Chemistry II 3 credits Conjugated unsaturated systems, aromatic hydrocarbons, structure, properties,     syntheses and reactions of the main classes of organic compounds, spectroscopy, polymers and compounds of biological importance. (lecture 3 hours; recitation 1 hour. Prerquisite: CHEM 1213C) 1376R Biochemistry—Lecture 3 credits Structure and function of biomolecules; kinetics and mechanism of enzymes; bioenergetics and metabolism; membrane structure and dynamics; signal transduction. Prerequisite: CHEM 1213C or permission of the instructor. 1377L Biochemistry Lab 2 credits Laboratory experiments are designed to illustrate methods of purification, separation, and characterization of proteins; acid-base titration of amino acids; biomembranes; enzyme kinetics; molecular modeling, computational chemistry, and bioinformatics of biologically relevant molecules.Prerequisite: CHEM 1376R. 1415R Physical Chemistry— Lecture 3 credits Thermodynamics, chemical equilibrium, solutions, electrochemistry. 
Applications to biological and biochemical problems are used to illustrate general principles.Prerequisites: CHEM 1046C; MATH 1412 (or higher)  1416R Physical Chemistry— Lecture 3 credits Quantum chemistry; the Schrödinger equation and some simple applications; extension to three-dimensional systems; H¬atom; many electron atoms; structure of molecules; introduction to computational methods (molecular mechanics, ab initio methods); molecular spectroscopy; statistical mechanics; kinetic theory; chemical kinetics. Prerequisites: CHEM 1046C; PHYS 1031C or 1041C; MATH 1413 1930; 1931 Current Topics 2 or 3 credits Selected subjects in chemistry. Discussion of current developments, problems, and literature. Open to seniors and selected juniors majoring in chemistry.Prerequisite: permission of the instructor. 1937 Seminar in Advanced Chemistry 1 credit Topics in all fields of chemistry presented by students and guest lecturers. Seminar meeting two hours every two weeks. Pre¬ or co-requisite: CHEM 1214R or permission of the instructor. 4901, 4902 Independent Study See Academic Information and Policies section. Laboratory fee on an individual basis. Yeshiva University 500 West 185th Street New York, NY 10033 Connect With YU
Kirkus Reviews

BANKRUPTING PHYSICS: How Today's Top Scientists Are Gambling Away Their Credibility, by Alexander Unzicker and Sheilla Jones. Pub Date: July 30th, 2013. ISBN: 978-1-137-27823-4. Publisher: Palgrave Macmillan.

With assistance from science writer Jones (The Quantum Ten, 2008), theoretical physicist and neuroscientist Unzicker compares the current state of theoretical physics to a bubble economy. "Governments can delay an economic disaster by printing money," writes the author. "Physics, to avoid the bankrupting of its theories, can resort to experiments with ever-higher energies." Unzicker buttresses this statement with further accusations, taking special aim at peer reviewers who blackball "'risky' ideas that run contrary to established views…while boring, technical papers are usually waved through." While carefully separating himself from cranks who deny special relativity or quantum theory on the one hand and religious fundamentalists on the other, the author offers a broad dismissal of modern theoretical physicists, whom he accuses of having "gotten lost in bizarre constructs that are completely disconnected from reality, in a mockery of methods that grounded the success of physics for 400 years." Unzicker also targets the massive expenditures of funds on high-energy particle accelerators. Unfortunately, the author's invectives are not matched by equivalent scientific depth. He simplifies the complexities of quantum physics and the Schrödinger equation to a "sophisticated technique, which boils down to the same math one uses to measure how springs—just like your Slinky—oscillate in three dimensions," and he ridicules attempts to explain anomalies in astronomical data by inferring the existence of dark matter and dark energy, comparing them to Ptolemy's use of epicycles to describe planetary orbits. He also disparages the failure of modern science to explain the discrepancies in size of fundamental forces such as gravity and electromagnetism. Unzicker unsuccessfully attempts to bolster the credibility of his own sweeping generalizations by claiming the mantle of esteemed physicists such as Roger Penrose and Lee Smolin, who seriously question the direction of current theory.
Lorentz invariance

1. In electrodynamics, the Coulomb gauge is specified by [tex]\nabla \cdot A=0[/tex], i.e., the 3-divergence of the 3-vector potential is zero. This condition is not Lorentz invariant, so my first question is: how can something that is not Lorentz invariant be allowed in the laws of physics? My second question concerns the polarization vector of a photon of 3-momentum k. Is this polarization vector a 3-vector or a 4-vector? If it's a 4-vector, what is the time component of the vector? The only condition seems to be that the 3-momentum k is perpendicular to the space components of the polarization vector. My last question is this. Suppose your photon has 3-momentum k entirely in the z-direction, and in your frame of reference the 4-vector polarization is e=(0,1,0,0), i.e., entirely in the x-direction. If you Lorentz boost your frame in the x-direction, then this 4-vector will receive some time component, say e'=(sqrt(2),sqrt(3),0,0). So when calculating a scattering amplitude, how do we know what the time component of our photon polarization vector is? In field theory, if the photon polarization vector has a non-zero time component, then the time component of the source, J0, plays an important role. However, J0 is associated with the scalar potential [tex]\phi[/tex] (they are conjugate variables). Do the scalar potential and charge density really matter in field theory, or are just the 3-vector potential and 3-current important?

2. alxm (Science Advisor): Well, the Schrödinger equation isn't Lorentz-invariant either, but we certainly use it a lot! It's allowed because if the relative velocities of the interacting particles are small, the speed of light is "infinite" to a good approximation. The corrections for a retarded potential (AKA the Breit interaction, in an atomic system) are typically fairly small.

3. Avodyne (Science Advisor): The physics is gauge invariant (that is, independent of the choice of gauge condition), so it's OK to choose a non-Lorentz-invariant gauge condition. The polarization is a 4-vector, and its dot product with the 4-momentum must be zero. In Coulomb gauge, the space components are orthogonal as well. So, in Coulomb gauge (but not in other gauges, in general) the time component of the polarization 4-vector is zero. If we start in a non-Lorentz-invariant gauge, then boosting takes us out of that gauge. So if you're going to specify Coulomb gauge (in which time components of polarization vectors are zero), then you're not allowed to boost. As for the scalar potential and charge density: they absolutely matter. In Coulomb gauge, you get an explicit Coulomb interaction among pieces of the charge density at different places.

4. Thanks all, that made sense. If you experimentally prepare a photon, don't you always have to prepare it in the Coulomb gauge? That probably didn't make sense, since gauge is not physical. But what I mean is: if you know a photon has a certain wavelength and direction and polarization, then where's the time component?
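A worked example may make Avodyne's answer concrete; this is only a sketch in the conventions of the original post, and the specific rapidity is an illustrative assumption chosen to reproduce the numbers given there (overall signs depend on the boost convention). Start with [tex]k^\mu=(\omega,0,0,\omega)[/tex] and the Coulomb-gauge polarization [tex]\epsilon^\mu=(0,1,0,0)[/tex]. A boost along x with rapidity [tex]\eta[/tex] gives

[tex]\epsilon'^\mu=(-\sinh\eta,\ \cosh\eta,\ 0,\ 0), \qquad k'^\mu=(\omega\cosh\eta,\ -\omega\sinh\eta,\ 0,\ \omega)[/tex]

so with [tex]\sinh\eta=\sqrt{2}[/tex], [tex]\cosh\eta=\sqrt{3}[/tex] one recovers the (sqrt(2), sqrt(3), 0, 0) of post 1, up to the sign of the time component. Since a gauge transformation shifts the polarization by a multiple of the photon momentum, Coulomb gauge can be restored in the new frame:

[tex]\epsilon''^\mu=\epsilon'^\mu+\frac{\sinh\eta}{\omega\cosh\eta}\,k'^\mu=\Big(0,\ \frac{1}{\cosh\eta},\ 0,\ \tanh\eta\Big)[/tex]

which again has zero time component, is orthogonal to the boosted 3-momentum, and satisfies [tex]\epsilon''\cdot\epsilon''=-1[/tex]. Scattering amplitudes are unaffected by the shift because current conservation implies [tex]k'_\mu \mathcal{M}^\mu=0[/tex] (the Ward identity). That is the practical answer to post 4: whatever frame you measure in, you may take the time component of the polarization to be zero in that frame.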
Quantum Matter Animated! by Jorge Cham – "I don't remember anything I learned in college"

Watch the first installment of this series. Transcription: Noel Dilworth. Thanks to: Spiros Michalakis, John Preskill and Bert Painter.

65 thoughts on "Quantum Matter Animated!"

1. Is it still not possible that the laser gave some part of its energy to the mirror? Is it possible to detect such a small instantaneous rise in temperature (which will be dispersed to the surroundings within a fraction of a second as it is maintained at 0 K)? Because if it is not completely possible to measure such small changes in temperature in such a small time, then how can we be sure that the red-shifted laser is NOT due to the laser giving off its energy? And if this is the reason, then this still does not prove that the mirror was vibrating. It started vibrating only after being hit by the laser. But due to temperature dispersal the mirror was instantly damped and brought again to zero vibrations, or the ground state.

• I am not a physicist, but the intuitive answer to your question is that if the laser were imparting energy to the mirror, and that was where the red shift was coming from, then there would still be a corresponding blue shift.

• Right. When the oscillator is in its quantum ground state, it can absorb energy but cannot emit energy, because it is already in its lowest possible energy state. Reflected light can be shifted toward the red (have lower energy than the incident light, because the oscillator absorbed some of the incident energy), but cannot be shifted toward the blue (have higher energy than the incident light). That's what the experiment found.

• Just a follow-on to John's response… The inability of the mechanical resonator to give up energy when it is in its lowest energy state seems like an obvious statement (by definition of "lowest energy state"), and so why is the experiment interesting then? All it did was confirm that indeed this energy emission goes away as the object gets colder and colder and approaches its ground (lowest energy) state. It is really the fact that the mechanical resonator can absorb energy when it is in the ground state that is interesting. The classical description of the motion of a mechanical object has no way of allowing for this asymmetry in the emission and absorption of energy with the environment; the processes must be symmetric, and zero when the object is not moving at temperature = 0 K. Think of it from the standpoint that the mechanical object isn't moving when in its classical ground state, and thus it is not doing work on its environment and the environment is not doing work on it. That is what makes the quantum description of the ground state of motion interesting; it allows for the asymmetry in the process of emission and absorption of energy by the mechanical resonator to (or from) the environment. I like to make the analogy to the spontaneous emission of light from an atom, in which there is no corresponding spontaneous absorption process of light. A well-defined "mode" (think of it as a particular direction and polarization) of light can be described by a similar set of quantum equations as those describing the mechanical resonator, and thus also has a ground state with intrinsic fluctuations.
These “zero-point fluctuations” or “vacuum fluctuations” can be thought of as triggers for atomic spontaneous decay and emission of light by the atom, but do not cause the reverse process of spontaneous excitation of the atom. [Aside: This used to really mystify me when I first learned about spontaneous (and the related stimulated) emission of atoms. The excellent little book by Allen and Eberly, does a nice job of de-mystifying the vacuum fluctuations.] A nice description of the above argument is also given in Aash Clerk’s Physics Viewpoint accompanying article: • Hi Oskar, John, and Paras: 0. For some odd reason, while fast browsing, I first read Oskar’s reply, and then John’s, and only after both, Paras’ original question. (Oskar’s was the longest and innermost indented reply, and so it sort of first caught the eye in the initial rapid browsing.) Even before going through your respective replies, I had happened to think of what in many ways is the same point as Paras has tried to point out above. … Ok. Let me put the query the way I thought of. 1. Here is a simple model of the above experimental arrangement, simplified very highly, just for the sake of argument. The system here consists of the mechanical oscillator and the light field. The environment consists of the light source, the optical measurements devices, the cooling devices, and then, you know, the lab, the earth, all galaxies in the universe, the dark matter, the dark energy … you get the idea. The environment also includes the mechanical support of the oscillator, which in turn, is connected to the lab, the earth, etc. *Only* the system is cooled to 0 K. [Absolutely! 😉 Absolutely, only the system is cooled “to” “0” K!!] The measurement consists of only one effect produced by the light-to-the mechanical oscillator interaction: the changes effected to the reflected light. This effect, it is experimentally found, indeed is in accordance with the QM predictions. (BTW, in fact, the experiment is much more wonderful than that: it should be valuable in studying the classical-to-QM transitions as well. But that’s just an aside as far this discussion goes.) 2. Now my question is this: what state of |ignorance> + |stupidity> + |insanity> + |sinfulness> [+ etc…] do I enter into, if I forward the following argument: At “0” K, the system gets into such a quantum mode that as far as the *reflection* of the light is concerned, if “I” is the amount of the incident light energy (say per unit time), then only some part of it (i.e. the red-shifted part of it) is found to be reflected. However, there still remains an “I – R” amount of energy that the system gives back to the environment via some *experimentally* unmeasured means. If it doesn’t, the first law of thermodynamics would get violated. We may wonder, what could be the form taken by such an energy leakage? Given the bare simplicity of the above abstract description as to what the system and environment here respectively consist of, the answer has to be: via some mechanical oscillation modes of the mechanical oscillator that we do not take account of (let alone measure) in this experiment. The leakage would affect the mechanical support of the oscillator, which, please note, lies *outside* of the system. [The oscillation modes inside the system may be taken to be quantum-mechanical ones; outside of it, as classical ones. But I won’t enter into debates of where the boundary between the quantum and the classical is to be drawn, etc. 
As far as this experiment—and this argument—goes, we know that "inside" the system, it has to be treated quantum-mechanically; outside, classical-mechanically; and that's enough, as far as I am concerned!] 3. Of course, I recognize that my point is subtly different from Paras'. His write-up seems to suggest as if there is an otherwise classically rigid-body oscillator sitting standstill, which begins to vibrate only after being hit by the laser. In contrast, I don't have this description in mind. He also seems to think rather in terms of a *transient* damping out of the mechanical oscillations. Though I do not rule out transients in the system, that wouldn't be the simplest model one might suggest here: I would rather think of the situation as if there were a more or less "steady-state" leakage of the missing energy into the environment. Yet, Paras does seem to appreciate the role of the environment—the unmeasured side-effects, so to speak, that the system produces on the environment. 4. Anyway, I would appreciate it if you could kindly let me know into what final state I should collapse: |ignorance> or |stupidity> or … . And, why 🙂 [BTW, by formal training, I am just an engineer. And, sorry if my reply is too verbose and has too many diversions…] Thanks in advance.

• About your parts (2) and (3), I think it is easier to think of it this way: at the low temperatures the system is subjected to (I really don't think it even makes sense to say that "only the system is cooled down to 0 K"; just saying that the system is cooled down to low temperatures is enough), a lot of the system's constituent particles are in their ground states. What is happening in this experiment is that absorption and excitation of constituent particles up from ground states is observed without the corresponding "classical" de-exciting reflection wave that you normally get. This is predicted from the quantum physics. The special thing about this experiment, though, is that they are also saying that the entire system itself, a macroscopic body, has a quantum wavefunction just like its microscopic parts. That is the part that is interesting and worth reporting upon. Because, if a macroscopic body has a quantum wavefunction, then it can also do all the rest of the quantum weirdness, and that applies to us humans, the Earth, being able to, say, perform quantum tunnelling. Once you see the experiment in this way, it is then obvious that the loss of energy that you perceive is merely the spontaneous emission of light by the excited particles, and, in this way, they drop back into the ground state of the entire system. This is important, because spontaneous emission is basically undetectable in our case, which is what the experiment observed. The point is that, classically, you are supposed to observe substantial energetic reflection (along with the spontaneous emission that you cannot remove), and you do not observe that in this experiment. (A short calculation after this comment makes the red/blue asymmetry John described explicit.)

2. Could you add a link to the paper about the experiment for those readers who want more details about it?
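To make the emission/absorption asymmetry described above explicit, here is a minimal textbook-style sketch in generic notation (nothing here is taken from the paper itself). For a mechanical mode with ladder operators $a$, $a^\dagger$ and phonon number $n$, the relevant matrix elements are

$$\langle n-1|\,a\,|n\rangle=\sqrt{n},\qquad \langle n+1|\,a^\dagger\,|n\rangle=\sqrt{n+1},$$

so the rate for reflected light to come out blue-shifted (carrying away a phonon) scales as $\Gamma_{\rm blue}\propto n$, while the rate to come out red-shifted (depositing a phonon) scales as $\Gamma_{\rm red}\propto n+1$. As $n\to 0$ the blue sideband vanishes while the red sideband survives, which is exactly the one-sided spectrum the experiment measures; a classical oscillator would give symmetric sidebands. Note also that no first-law problem arises: the energy deficit between incident and reflected light is deposited as phonons in the resonator and carried off by the refrigeration that keeps it cold.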
3. Whenever someone asks me for a book to explain quantum mechanics to laymen, I always point them to this: It's an illustrated book about the history of quantum mechanics created by Japanese translation students studying English. They chose the topic because they needed to be able to accurately translate relatively technical material. It's wonderful for answering the questions you raise in the post above.

4. Great video describing a really interesting experiment. However, it is far from reaching the important lessons from Quantum Mechanics that have shaped the way we see the universe. Forget Quantum Computing. I am not saying that Quantum Computing is not sexy or something, but it is not where the paradigm shift is. One of the greatest things that a Physics undergraduate degree forces you through is learning about Condensed Matter Physics. You might think that, in contemporary Physics education, they would certainly teach you both Quantum Mechanics and General Relativity. After all, they are what we call the new world view that revolutionised how we as a species see ourselves. The truth, however, is that, if I did not force them to teach me, they would have ignored General Relativity and just taught me Quantum Mechanics. Lots of it. Without motivation. It is only at the end of the Physics degree that you get to see why it is arranged in the way it is. Special Relativity, the one that Einstein published in 1905, is a really easy thing. Yes, it is bizarre, but you can easily teach it, and later on, you can tell students to apply what they have learnt. That it is reducible to small equations that are easy to memorise is another plus point. General Relativity, on the other hand, is a pain to teach — everybody, mathematician or physicist, would be confused by the initial arguments, the mathematical notation and all that jazz, until you have completed the module. And even after that, some people just never get it (although, luckily, it is simple enough that a large chunk of people actually understand it very fully, contrary to Eddington's bad joke). The deal breaker, however, is that the ideas from General Relativity, although a nice help to the other parts of Physics, are very far from essential. i.e. People can make do without any knowledge of that, and still contribute to the rest of Physics in a proper way. That is not the same with Quantum Mechanics. The standard way they teach Quantum Mechanics these days is to throw the mathematics at you, right at the start. Just write down your energy equation (that you can remember from high school), do your canonical quantisation (which is nothing other than replacing symbols you know about with derivative signs; a monkey can do that; see the short illustration after this comment), and tack on something magical that we call the wavefunction, and voilà, Quantum Goodness! Since there is nothing to actually understand about it, I watched in amusement as everybody around me struggled to understand something out of nothing, congratulating myself for actually knowing the meaninglessness of it all. Boy, what did I know? The next module, aptly named "Atomic and Molecular Physics", looked like nothing but applications of the mathematics learnt. It was HORRIBLE to go through, especially since it looked like vocational training — approximation and other calculational techniques that are hardly useful outside higher and higher corrections to the properties of materials that classical physics could have found out about (except quantisation, of course).
It was important to have learnt it (not least because it was the first place in which Quantum Entanglement was taught), but it felt like we were just learning tricks instead of ideas. Statistical Thermodynamics was better. Building upon Thermal Physics in first year, there was a bit of Quantum effects being shown in action, especially the Quantum Degeneracy pressure that keeps stars the size they are. Then BOOM! Condensed Matter Physics (I learnt it under the older name, Solid State Physics). I had to completely rewrite what I thought I had known about Quantum Theory, for it is obvious I knew nothing. I am sure you guys have heard of the adage: "When stuff is moving fast, is large, or heavy, General Relativity cannot be neglected. When things are small, Quantum corrections cannot be neglected." It is still true, but there is a sleight of hand here — we have yet to define what it means to be "large" or "small". In particular, whenever you have a lot of material squeezed into a small space, i.e. high density, it is small. Thus, something can be both large and small at the same time, requiring both General Relativity and Quantum Theory to describe. A black hole is one such object. The name "Condensed Matter" is a really good one. Any liquid or solid, really, is condensed, so condensed that actually it is no longer a classical system — the quantum effects DOMINATE. Without incorporating Quantum Theory right into the heart of it all, nothing you calculate even makes sense. And since our first approximations here beat the best classical calculations left-right-centre, there was also no reason to teach the classical approximation techniques either. Specifically, notice how, in high school, people teach you that heat and sound are just atoms moving about in different ways? Classical theories can talk about heat propagation and sound propagation and motion. But they are three different islands that don't even make sense together. So different that even their mathematical tools are different. But in Quantum Theory, the same mathematics describes all three as one united whole, at the zeroth approximation, and even gives you dispersion, which is something classical theories cannot explain without complicated methods. After being floored by how it actually is done, the icing on the cake was Transistors. The theory was originally made in order to explain how metals behave, and we talk about a free electron gas to explain how metals conduct so well. So it came as a complete shock that with any improvement, notice, ANY SIMPLE improvement, to the free electron picture, be it the Nearly Free Electron model or the Tight Binding model, energy bands appear. In practical terms, the theory that sought to explain metals now explains insulators, and even more, predicts the existence of this previously unheard-of class of materials, known as semiconductors. Indeed, it does even better. It predicts their existence, how to make them, and how they would be useful. It is the first time that Physics THEORY was faster and earlier than the experimenters at any topic. So, yeah, while you are enjoying your computers reading this piece, appreciate the sheer ingenuity and wonder that is brought to you by the Quantum revolution. Please alert Jorge to this. He can do wonders with information.
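For readers wondering what "replacing symbols with derivative signs" amounts to, here is the canonical quantisation recipe the comment above alludes to, in its simplest one-dimensional form. Start from the classical energy relation and substitute differential operators acting on a wavefunction $\psi(x,t)$:

$$E=\frac{p^2}{2m}+V(x), \qquad E\to i\hbar\,\frac{\partial}{\partial t}, \qquad p\to -i\hbar\,\frac{\partial}{\partial x},$$

which yields the Schrödinger equation

$$i\hbar\,\frac{\partial\psi}{\partial t}=-\frac{\hbar^2}{2m}\,\frac{\partial^2\psi}{\partial x^2}+V(x)\,\psi.$$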
Heat transfer, viscous momentum transfer, and diffusive mass transfer all work basically the same way, because they are closely related effects of the same basic process. All of this can be derived in a unified framework using the principles of classical kinetic theory, because all of it is inherent in the Boltzmann equation. It’s true that you need quantized internal energy states to accurately predict something as simple as the temperature dependence of specific heat in a gas. But it seems to me that you are somewhat exaggerating the shortcomings of classical physics. • I am really doubtful of that. The reason is that the mathematical apparatus is just not the same. For the propagation of heat, you have the heat equation in classical physics, with the propagation constant kappa. For sound, the harmonic approximation gives rise to a fixed speed of sound, which you later improve upon by adding anharmonic terms so that the speed of sound becomes a variable. Those two constants are not the same. Granted, they are not even dimensionally comparable, but the fact is that you have to treat them rather differently. The reason for this discrepancy is that sound propagation exhibits a stronger frequency dependence, so that it is easier to look at one frequency at a time. Heat, on the other hand, is usually averaged over in the context of classical heat propagation. This makes it really complicated, as you have to average over both space and time and weight the states according to the probabilities of being in such-and-such states. Note that this last thing is itself temperature dependent, so classical physics is crazy. Nothing stops a person from combining the heat and sound contributions in classical physics, but they are like Frankenstein combinations — oh, this contribution is for heat, and that for sound, and this for their interaction. That is very different from the truly unified descriptions in Quantum theory, where it is one term, and one term only, that we are looking at. Because of that, I do not think I am exaggerating the shortcomings of classical physics. It simply is not a unified framework, although it is frequently possible to push approximations in classical physics to really high orders of accuracy. That, I can give, but not unification. And even then, one should notice the tremendous difference in the mathematical methods involved. Yes, both approaches heavily depend on Fourier analysis, but that is just about where their similarities end. Instead, a knowledge of the approximation techniques in classical physics is only useful for the continuum free-space approximation of the transport of various quantum objects, whereas proper quantum approximation techniques are frequently simpler than their classical counterparts. Finally, bulk motion is very different from either sound or heat in any case, except for the fact that they are all of zero frequency (actually, this is how the normal-mode mathematical technique announces its own failure, and there are ways to compensate formally). Luckily, it is seldom a problem that this is happening — after all, bulk motion would, somewhat, be better treated with relativistic methods. • I suspect we’re talking past one another a bit here. I’m a fluid dynamicist. I’ve studied some advanced solid mechanics and continuum mechanics, but mostly I’m a fluid dynamicist. When you say stuff like “bulk motion is very different from sound”, I think of the underlying physical principles, because in the derived practical equations I use this is not true.
But when you say stuff like “the heat equation in classical physics, with the propagation constant kappa”, I think of the phrase “toy equation”. Even in the engineering form of the heat equation, or the Navier-Stokes equations for a linear isotropic fluid, kappa is a coefficient, not a constant (though turbulence modellers generally ignore its thermodynamic derivatives). And it doesn’t show up at all in the Boltzmann equation, unless you do the math and derive it. Regarding “unified framework”, I expressed that poorly. Sure, in the engineering equations, first-order fluxes like acoustic propagation and bulk motion are handled differently than second-order fluxes like heat transfer and viscous stress. This is because their behaviours are different, so the simplest reasonably accurate mathematical descriptions of them will unavoidably be different. But it should never be forgotten that they can both be derived from the same statistical mechanical representation. It strikes me that what the Boltzmann equation is to fluid mechanics is somewhat analogous to what Schrödinger’s equation is to quantum condensed matter physics (though it isn’t quite as fundamental). The general form isn’t very useful by itself, but specializations and approximations can produce good enough results to translate into engineering equations. The key to the Boltzmann equation (assuming you have enough dimensions to describe all important degrees of freedom) is the collision operator, which could be said to be analogous to the Hamiltonian in the Schrödinger equation. The collision operator describes all interactions between particles and is very difficult to specify exactly for real physical systems, though a number of popular approximations exist. I gather this is a bit different from the quantum-mechanical approach you’re talking about, where a lot of condensed systems can be described surprisingly well with “noninteracting” approaches… People have tried to use the Boltzmann equation (with or without quantum effects) to model solids, with mixed success. It seems to be best at fluids, especially gases and plasmas, perhaps because the molecular chaos assumption is difficult to remove. Look, I’m not claiming that quantum physics is no better than classical physics. But you seem to be saying “classical physics” when you should be saying “classical engineering approximations”, and then drawing conclusions based on the conflation of the two. Comparing the Schrödinger equation to something like the Navier-Stokes equations, never mind the heat equation, is apples-to-oranges. You can actually derive all of the basic principles of fluid mechanics from Newtonian mechanics, without even referencing electromagnetics, though your accuracy won’t be very good… I shouldn’t have gotten involved. I have a segfault to chase down… • I see better where you are coming from. You are clearly talking about deeper stuff, and good luck with your segfault. However, I do not think that your argument is convincing enough. Yes, it is possible to derive fluid equations and so forth from Newtonian mechanics. The problem still persists, however, that after the derivation (in which kappa turns out to be a derived quantity and actually not a constant), the treatment of heat and sound needs to be done as stitched patchworks on top of the same fundamentals. As you rightly noted, I was saying that you don’t treat it that way in quantum physics, and it is quite important to see how it is actually handled differently.
Also, the “proper” way to deal with interacting quantum systems is to couple them. For example, when phonons and photons interact, a proper treatment is to deal with waves that are half-phonon and half-photon (polaritons) and then quantise them yet again. This is completely different from how classical approaches tackle these problems. Yet again, I have to reiterate that I am not saying that you cannot get good results from classical considerations. What I am saying is that, due to how classical ideas actually arise from quantum fundamentals (namely, that everything classical tends to just be the conflation of modal [as in, most probable] behaviour as the _only_ [or mean, if you are talking about bulk stuff] possibility), the approximation schemes are doomed to complications for little gain. One of these is the asymmetrical treatment of heat and sound. That is, even after you derive heat and sound from the same underlying bulk motion of continuum mechanics, you still have to treat them separately, whereas quantum physics insists that they are _exactly_ the same thing, just different limits of the same _one_ term in any approximation scheme. It is the same thing with fluids. Very few physicists are dealing with the Navier-Stokes equations themselves, since that is now the preferred game of applied mathematicians. Instead of asking whether the Navier-Stokes equations can have solutions for so-and-so kinds of problems, the physicists working on fluids tend to be working, instead, on the quantum corrections that should be added onto the Navier-Stokes equations. After all, chaos sets in earlier than the Navier-Stokes equations imply, because, near the critical points, modal behaviour is nowhere near the mean behaviour that we should have been focusing upon all this while. Sadly, this is so difficult that we have yet to do something fundamentally good about it. In that case, I am not saying that the corresponding classical problems are not important or not good at describing physical systems, but that the quantum world view is very different. And since the fundamental picture clearly needs to be quantum, I merely mean to say that those quantum considerations happen to be even more important than the classical problems. • Well, I got led on a merry chase and finally found what was causing the memory error. Turns out it was my fault all along… Rather than “the problem still persists” after the derivation, I would say that the problem ARISES in the derivation of transport equations from the fundamentals. The Boltzmann equation doesn’t have separate terms for heat, sound, bulk motion, viscous stress, etc., because it directly describes the molecular motion those things are emergent properties of. It’s not continuum mechanics either; it’s perfectly capable of describing rarefied gases and even free molecular flow. Of course quantum mechanics is a much better model than classical physics for condensed matter behaviour, and even some aspects of gas/plasma behaviour. I completely agree with you there. But I maintain that the specific criticism I was responding to, that of classical physics having an inherently fragmented picture of material mechanics, was not accurate, seemingly because of a mismatch in the fundamentality of the descriptions being compared. • Sorry, I don’t know why, but the comment system won’t allow me to reply to yours. I see. That would be totally my ignorance, then.
However, I would like to point out, to replace the original wrong argument, that the natural ideal gases that we are familiar with are actually Fermi gases in the high temperature and low density limit. If that were not the case, we would run into what is known as Gibbs’ paradox, in which a classical gas, in the equations, somehow has a lot less pressure than expected. In particular, the ideal gas equation pV = NkT would miss out on the factor of N, which is around 10^24. That makes no sense, until one realises that quantum indistinguishability (which is basically quantum entanglement, really) needs to be taken into account. I hope that little bit shows that even dilute gases, in which we do not expect quantum effects to be important, turn out to be critically dependent upon quantum ideas nonetheless. Of course, the rest of the system does not require quantum corrections, and there is an easy fudge factor to fix that problem, but it does show how quantum theory is still a vital component of everyday life, not some esoteric correction that only people caring about precise effects can observe. (Which is the underlying point I really wanted to outline, although my choice of example turned out to be wrong.) Thanks. It seems, however, that the “classical atoms” view that is given by the Boltzmann equation incorporates enough physics to reproduce the important things I was caring about. Interesting. • I hate to keep doing this, but… The Gibbs paradox has to do with the definition of entropy. If you don’t assume indistinguishability, you can toggle the entropy up and down by opening and closing a door between two identical reservoirs. You can get the correct pressure just fine with classical gas kinetics. But there are other things about gas dynamics that require quantum treatment. The temperature dependence of ideal-gas specific heat in multiatomic substances, for instance, is quite substantial and entirely due to the quantization of internal energy storage modes (at lower temperatures, there usually isn’t enough energy in a collision to excite these modes). • Or something like that – I had to look up Gibbs’ paradox, and I’m not completely sure my facile description above is right… • Nah, I know that classical gas kinetics can derive the pressure just fine. Why, indeed, I was just teaching my student that elementary derivation. But it does mean that both the Boltzmann and Gibbs entropies cannot be derived from classical reasoning without the indistinguishability fudge factor. You would have to rely on the Sackur–Tetrode entropy (removing all the quantum stuff and replacing it with an unknown constant, of course). It might not seem like a big issue at first glance, but it actually is. Other than the fact that the entropy of mixing (that you were describing) has to be discontinuously and manually handled, it also means that stuff like phase changes goes haywire. Again, that is useless to a fluid dynamicist until you want to deal with, say, ice-water mixtures or critical phenomena. Or worse, the theory is inconsistent. Judging by how seriously you take the mathematics, it is either screaming at you that you are doing something wrong, or that phenomenology needs to be used (by curve-fitting the unknown constant there, for one). Instead, what I wanted to impress upon you is that, instead of deriving the pressure from kinetic theory (actually, what a bad name! It is not a theory, nor does kinetic make sense as its modifier. Instead, classical atomic model would be its rightful name), it is possible to subsume the entirety of classical thermodynamics into the 2nd Law. That is, given the existence and some assumed properties of the entropy, you can construct everything you find in classical thermodynamics, even without statistical thermodynamics. That is, the 0th Law and 1st Law, in particular, are theorems if you assume the 2nd Law to be your postulate. Actually, it is even a bit less — you assume parts of the 2nd Law, and prove the full form of the 2nd Law from the assumptions. The issue I was referring to is that, if you take this view, in which pressure is just a derivative of the entropy via Maxwell’s relations, and then try to construct statistical thermodynamics from it, you will run into Gibbs’ paradox.
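(To make the “unknown constant” remark concrete, here is a standard textbook summary, added for illustration: dividing the classical partition function by the indistinguishability factor N! yields the Sackur–Tetrode entropy,

\[
Z = \frac{Z_1^N}{N!}
\;\;\Rightarrow\;\;
S = N k_B \left[\ln\!\left(\frac{V}{N\,\lambda_T^3}\right) + \frac{5}{2}\right],
\qquad
\lambda_T = \frac{h}{\sqrt{2\pi m k_B T}} ,
\]

where Planck’s constant h is exactly the constant that classical reasoning must leave undetermined; without the 1/N!, the argument of the logarithm becomes V/λ_T³ and the entropy fails to be extensive, which is the mixing paradox discussed above.)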
At the end, there is no need to worry about dragging the conversation out. Actually, I was still waiting for some insights from you — you have already shown me wrong once, and there is no reason why you cannot teach me more. 5. I particularly like the statement: 6. Okay, physicist, most of the things in the video are not new to me, but good presentation. Commented, though, to point out that the “everything is named after Quantum” habit is an interesting recurring phenomenon in the USA. Perhaps the largest one was the use of “Radio” in naming things. Radio was the internet on steroids, the “tech stock” of the 1920s bubble. One of the most famous meaningless uses of Radio from the time was the little red wagon called a Radio Flyer. The company just put two hot buzzwords together, and created a legendary product. 7. Pingback: The Webcomic Guide to Quantum Physics | Slackpile 8. Dear Jorge Cham, I enjoyed your cute animation. Since you said you were looking for ways to think about quantum mechanics, I thought the resource list below might be interesting. Please feel free to contact me with questions. David Liao One of my physics professors from Harvey Mudd College (half an hour east of Caltech) wrote a wonderful book on quantum mechanics for junior physics majors: John Townsend, A Modern Approach to Quantum Mechanics, University Science Books: Sausalito, CA (2000) (http://www.amazon.com/A-Modern-Approach-Quantum-Mechanics/dp/1891389785). The academic pedigree of this book comes through Sakurai’s Modern Quantum Mechanics. Get a hold of Professor David Mermin at Cornell. Tell him you are working with Caltech on this animation series, and ask him to walk you through his slides on Bell’s inequalities and the Einstein-Podolsky-Rosen paradox (http://www.lassp.cornell.edu/mermin/spooky-stanford.pdf). If you can meet with Sterl Phinney at Caltech, talk to him. He seems to know a lot about a lot, and he’s really fun to be around. Fundamental concepts: There is a variety of ways to introduce quantum mechanics. The following two flavors can provide particularly satisfying insight: Path-integral formulation — A creative child can tell a bunch of different imaginary stories to explain how a particle got from situation A to another situation B during the course of a day. A mathematician can associate with each story a complex phasor. The phasors can be added (in a vector-like head-to-tail fashion) to obtain an overall complex number for getting from A to B, whose squared magnitude is the overall probability of getting from A to B. The concept of extremized action from classical mechanics (think of light taking the path of least time) is a limiting approximation of the quantum-mechanical path-integral formulation. For this brief description, I skipped a variety of details. This perspective is attributed to Richard Feynman.
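A toy numerical version of this phasor picture (an illustration added here, not part of the original comment; the path family and units, with hbar = m = 1, are made up for the demo):

import numpy as np

# Toy phasor sum: a free particle travels from x=0 at t=0 to x=L at t=T.
# Each "story" is a two-segment path passing through an intermediate
# point (y, T/2); its phasor is exp(i S / hbar), with S the kinetic action.
hbar, m, L, T = 1.0, 1.0, 1.0, 1.0

ys = np.linspace(-3.0, 3.0, 2001)        # intermediate positions to sum over
v1 = ys / (T / 2)                        # velocity on the first segment
v2 = (L - ys) / (T / 2)                  # velocity on the second segment
S = 0.5 * m * (v1**2 + v2**2) * (T / 2)  # action of each two-segment path

phasors = np.exp(1j * S / hbar)
print("total amplitude:", phasors.sum())

# Phasors of paths far from the classical straight line (y = L/2) spin
# rapidly and cancel head-to-tail; paths near it add up coherently,
# which is the stationary-action limit mentioned above.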
State vector, operators — An older, more traditional description of quantum mechanics centers around the state vector (often denoted |psi>). “All that can be described” about an entity of interest is hypothetically abstracted as a vector from a vector space of all possible descriptions that can be associated with the entity. It is hypothesized that the outcomes of measurements correspond to [real] eigenvalues of [Hermitian] operators that can act on the state vector, and that when it is appropriate to describe an entity using one single eigenstate of an operator, this means that an observation corresponding to that operator will without doubt yield the corresponding eigenvalue as the measured result. Note: State |psi> is *not* wavefunction psi(x). psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>. Risky vocabulary: It is important to be aware of verbal shortcuts that are used to make quantum seem more conceptually accessible in the short term but that, unfortunately, also make quantum much more difficult to understand fundamentally in the long term: There is no motion in any energy eigenstate (ground state or otherwise). Words such as “vibration” and “zooming around” are only euphemistically associated with any *individual* energy eigenstate. As an example, the Born-Oppenheimer approximation for solving the time-independent Schrödinger equation by separating the electronic and nuclear degrees of freedom is often justified using a story that involves the phrase “the light electrons are whizzing around the nuclei faster than the massive nuclei are slowly vibrating around their equilibrium positions.” This is shorthand for saying that the curvature term associated with the nuclear coordinates is ignored as the first term in a perturbative expansion because it is suppressed by the ratio of the electron mass, m, to the nuclear mass, M (for details, http://www.math.vt.edu/people/hagedorn/). Even though the Heisenberg relationship is often described using phrases such as “not knowing how we disturbed a particle by looking at it,” a more fundamentally satisfying understanding is obtained by seeing that some operators don’t commute. Because some pairings of operators, such as position and momentum, don’t share eigenvectors, it is impossible for an entity to simultaneously be in an eigenvector for one operator, say, x position, while also being in an eigenvector for the other operator, in this example, x momentum. Having the momentum well defined (being in an eigenvector for momentum) corresponds to being unable to associate one particularly narrow range of position eigenvalues with the entity. This is essentially the Fourier cartoon you used in the animation (narrowness in space corresponds to less specificity in frequency/wavelength and vice versa). Beware of popular reports of the experimental observation of a wavefunction. Pull up the abstract from the underlying peer-reviewed manuscripts. I bet that the wavefunction has not been directly observed. Instead, the squared magnitude (probability distribution) has been inferred from a large collection of individual experiments.
As an example, a recent work inferring the nodal structure (radii where the probability of finding the electron around an atomic core vanishes) became popularized as direct observation of the wavefunction, which is not the claim in the original authors’ abstract. • Hi David, By and large, a good write-up. But, still… 1. A minor point: Did you miss something on the right-hand side of the equality sign? In any case, I guess you could streamline the line a bit here. 2. A major point: “There is no motion in any energy eigenstate.” And, just, how do you know? [And, oh, BTW, you could expand this question to include any other eigenstates as well.] Anyway, nice to see your level of enthusiasm and interest for these *conceptual* matters as well. Coming from a physics PhD, it is only to be appreciated. • Thank you for your reply. Hope the following is helpful! 1) Thank you for catching the typo in the sentence, “psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” This sentence should, instead, read, “psi(x) is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” I don’t know how to edit my post to correct this sentence. 2) You asked how it is possible to know that there is no motion in an energy eigenstate. Below, I include two ways to respond. The abstruse response is an actual answer and points to the insight you are seeking. If you look closely, you will see that the graphical response is not an actual answer. Instead, it is a fun exercise for “feeling the intuition” that energy eigenstates do not have motion. Both responses are important (many physicists enjoy both casual “proofs” and fluffy intuition). Abstruse response: We argue that an object that is completely described by one energy eigenstate has no motion. An energy eigenstate is a solution to the time-INDEpendent Schrödinger equation. It’s very “boring.” The only thing that happens to it, according to the time-DEpendent Schrödinger equation, is a rotation of its overall complex phase. This phase does not appear in expectation values, and so all expectation values are constant in time. To obtain motion, it is necessary to have a superposition of more than one state corresponding to at least more than one energy eigenvalue. In such circumstances, at least some of the complex phases will rotate at different time frequencies, allowing *relative* phases between states in the superposition to change with time. I am not claiming that experimental systems that people abstract using energy eigenstates will never turn out, following additional research, to have any aspect of motion. I am saying that the *abstraction* of a single energy eigenstate itself (without reference to whether the abstraction corresponds to anything empirically familiar) is a conceptual structure that contains no concept of motion (save for the rotating overall phase factor). The mathematics described above are very similar to the mathematics that describe the propagation of waves in elastic media. A pure frequency standing wave always has the same shape (though it might be vertically scaled and upside down). A combination of standing waves of different frequencies does not always maintain the same shape.
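A minimal numerical sketch of the abstruse response (an illustration added here, assuming units with hbar = m = omega = 1): for a harmonic oscillator, the position expectation value is constant in the ground state but oscillates for a superposition of the n = 0 and n = 1 eigenstates.

import numpy as np

# Harmonic oscillator (hbar = m = omega = 1).
# psi0, psi1: the lowest two energy eigenstates on a grid.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                   # E0 = 1/2
psi1 = np.sqrt(2) * np.pi**-0.25 * x * np.exp(-x**2 / 2)  # E1 = 3/2

for t in np.linspace(0, 2 * np.pi, 5):
    # Single eigenstate: only an overall phase rotates.
    eig = psi0 * np.exp(-1j * 0.5 * t)
    # Equal superposition: the relative phase between E0 and E1 evolves.
    sup = (psi0 * np.exp(-1j * 0.5 * t)
           + psi1 * np.exp(-1j * 1.5 * t)) / np.sqrt(2)
    x_eig = np.sum(np.abs(eig)**2 * x) * dx   # stays 0 for all t
    x_sup = np.sum(np.abs(sup)**2 * x) * dx   # oscillates like cos(t)
    print(f"t={t:4.2f}  <x>_eigenstate={x_eig:+.3f}  <x>_superposition={x_sup:+.3f}")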
Graphical response: Go to http://crisco.seas.harvard.edu/projects/quantum.html and play with the simulator. Now set the applet to use a Harmonic potential, and try to sketch, using the “Function editor,” the ground state from http://en.wikipedia.org/wiki/File:HarmOsziFunktionen.png You might want to turn on the display of the potential energy function to ensure an accurate width for the state you are sketching. Run the simulation. Notice that the function doesn’t move very much (or, in the case that you sketched the ground state with perfect accuracy, it shouldn’t move at all). Now, sketch a different state that doesn’t look like any one of the energy eigenstates in the Wikipedia image above. This should generate motion (to some extent looking like a probability mound bouncing back and forth in the well). You can also look at the animations at http://en.wikipedia.org/wiki/Quantum_harmonic_oscillator and see that the energy eigenstate examples (panels C, D, E, and F) merely rotate in complex space (red and blue get exchanged with each other), but the overall spatial probability distribution is unchanged. 3) You asked whether one would assert absence of motion for other eigenstates. Not as a general blanket statement. The reason that energy eigenstates have no motion is that they are eigenstates, specifically, of the Hamiltonian. Yes, in some examples, it is possible for an eigenstate of another operator to have no motion (i.e. when that state is an eigenstate both of the other operator and of the Hamiltonian). • Cool. Your abstruse response really wasn’t so abstruse. But anyway, my point concerning the quantum eigenstates was somewhat like this. To continue on the same classical mechanics example as you took, consider, for instance, a plucked guitar string. The pure frequency standing wave is “standing” only in a secondary sense—in the sense that the peaks are not moving along the length of the string. Yet, physically, the elements of the string *are* experiencing motion, and thus the string *is* in motion, whether you choose to view it as an up-and-down motion, or, applying a bit of mathematics, view it as a superposition of “leftward” and “rightward” moving waves. The issue with the eigenstates in QM is more complicated, only because of the Copenhagen/every other orthodoxy in mainstream QM. Mainstream QM in principle looks down on the idea of any hidden variables—including those local hidden variables which still might be capable of violating the Bell inequalities. They are against the latter idea, in principle—even if the hidden variables aren’t meant to be “classical.” Leaving aside a few foundations-related journals, the mainstream QM community, on the whole, refuses to seriously entertain any idea of any kind of a hidden variable—and that’s exactly the way in which the relativists threw the aether out of physics. … I was not only curious to see what your inclinations with respect to this issue are, but also to learn the specific points with which the mainstream QM community comes to view this particular manifestation of the underlying issue. In particular, do they (even if epistemologically only wrongly) cite any principle as they proceed to wipe every form of motion out of the eigenstates, or is it just a dogma? (I do think that it is just a dogma.) Anyway, thanks for your detailed and neatly explanatory replies. … Allow me to come back to you also later in the future, by personal email, infrequently, just to check out with you how you might present some complicated ideas, esp. from QM.
(It’s a personal project of mine to understand mainstream QM really well, and to more fully develop a new approach for explaining the quantum phenomena.) • Ah, I see better where you are coming from. You are wondering what explanations someone might give for focusing on mainstream QM interpretations and de-emphasizing hidden variables perspectives. Off the top of my head, I can imagine what people might generally say. I can also rattle off a couple of thoughts as to why my attention does not wander much into the world of hidden variables. Anticipated general responses (0) I imagine usual responses would refer to Occam’s Razor and/or the Church of the Flying Spaghetti Monster. People might say that Occam’s Razor (or something along the same lines) is a fundamental aesthetic aspect of the Western idea of “science.” I am not saying these references directly address the most logically reasoned versions of the concerns you might be raising. (0.1) I think some professional scientists are laid back about conceptual cleanliness. It doesn’t bother them enough to “beat” the idea of motion in eigenstates out of students in QM. I know a couple of professional scientists who are OK with letting students think that electrons are whizzing around molecules. Personal thoughts (1) I don’t necessarily “believe” mainstream QM in a religious sense, but it feels natural (for my psychology). My gut feelings of certainty about the existence of things somewhat vanish unless I am directly looking at them, touching them, and concentrating with my mind to force them “into existence” through brutal attention. People like to sensationalize mainstream QM by saying that it has counterintuitive indeterminacy. At the end of the day, what offends one person’s intuitions can be instinctively natural for someone else. I hear that mainstream QM is also “OK” for people who hold Eastern belief systems (I’m atheistish, so I don’t personally know). (2) Mainstream QM has a particular pedagogical value. It offers an exercise in making reasoned deductions while resisting the urge to rely on (some) inborn intellectual instincts. I think it’s good for learning that we sometimes confuse [1] the subjective experience of *projecting* a well-defined, deterministic mental image of the dynamics of a system onto a mental blank stage representing reality with, instead, [2] the supposed process of directly perceiving and “being unified with” reality. Yes, philosophy courses can be valuable too, but in physics you can also learn to calculate the photospectra* of atoms and describe the properties of semiconductors and electronic consumer goods. * Surprisingly difficult to do in a fully QM treatment at the undergraduate level. Perturbing the atom with a classical oscillating electric field is *not* kosher. It’s much more satisfying to quantize the EM field. Does any of this mean that mainstream QM is true? No. No scientific theory is ever “true” (quotation marks refer to mock hippie existential gesture). David Liao P.S. I am happy to share my email address with you – how do I do that? Does this commenting platform share my address (sorry, not used to this system)? • Hi David, 1. Re. Hidden variables. Philosophically, I believe in “hidden variables” to the same extent (i.e. to the 100% extent) and for the same basic reason that I believe that a train continues to exist after it enters a tunnel and before it emerges out of the same.
Lady Diana *could* suffer an accident inside a tunnel, you know… (I mean, she would have continued to exist even after entering that tunnel—whether observed by those paparazzi or not. That is, per my philosophical beliefs…) Physics-wise, I (mostly) care for only those hidden variables which appear in *my* (fledgling) approach to QM (which I have still to develop to the extent that I could publish some additional papers). I mostly don’t care for hidden variables of any other specifically physics kind. Mostly. Out of the limitations of time at hand. 2. Oh yes, (IMO) electrons do actually whiz around. Each of them theoretically can do so anywhere in the universe, but practically speaking, each whizzes mostly around its “home” nucleus. 3. About mysticism: Check out J.M. Marin (DOI: 10.1088/0143-0807/30/4/014). Mysticism was alive and kicking in *Western* culture even at a time when Fritjof Capra was not yet born. The East could probably lay claim to the earliest and also a very highly mature development of mysticism, but then, I (honestly) am not sure to whom should go the credit for its fullest possible development: to the ancient mystics of India, or to Immanuel Kant in the West. I am inclined to believe that, at least in terms of rigour, Kant definitely beat the Eastern mystics. And that, therefore, his might be taken as the fullest possible development. Accordingly, between the two, I am inclined to despise Kant even more. 4. About my email ID. This should be human readable (no dollars, brackets, braces, spaces, etc.): a j 1 7 5 t p $AT$ ya h oo [DOT} co >DOT< in . Thanks. • Entering this comment for the third time now (and so removing bquote tags)–ARJ Hi David, 1. A minor point: 2. A major point: >> “There is no motion in any energy eigenstate.” And, just how do you know? 9. Great idea for doing this. Just a hint for getting more non-physicists involved: talk at least half as fast as you do; people need time to absorb and self-explain, otherwise, no matter how simple it is, they lose you at the beginning. 10. Pingback: Quantum Matter Animated! | Astronomy physics an... 11. Pingback: Quantum Frontiers and Tuba! | Creative Science • Mankei, Interesting. You seem to have been having fun thinking about this field for quite some time. Anyway, here are a couple of questions for you (and for others from this field): (i) Is it possible to make a mechanical oscillator/beam detectably interact with single photons at a time (i.e. a statistically very high chance of only one photon at a time in the system)? [For instance, an oscillator consisting of the tip of a small triangle protruding out of a single layer of atoms as in a graphene sheet? … I am just guessing wildly for a possible and suitable oscillator here. Note, for single photons, it won’t be an _oscillator_ in the usual sense of the term. However, any mechanical device that mechanically responds (i.e. bends) would be enough.] (ii) If such a mechanical device (say an oscillator) is taken “to” “0” K, does/would/will it continue to show the red/blue asymmetrical behavior? [Esp. for Mankei] What do you expect? • (i) In theory it’s possible; there have been a few recent theoretical papers on “single-photon optomechanics” that explore what would happen, but experimentally it’s probably very, very hard. Current experiments of this sort use laser beams with ~1e15 photons per second. (ii) I have no idea what would happen then, because my math and my intuition always assume the laser beam to be very strong.
Other people might be able to answer you better. • Hi Mankei, 1. Thanks for supplying what obviously is a very efficient search string. (The ones I tried weren’t even half as efficient!) … Very interesting results! 2. Other people: Assuming that the gradual emergence of the red-blue asymmetry with decreasing temperatures (near absolute zero) continues to be shown even as the *light flux* is reduced down to (say) the single-photon level, then how might Mankei’s current model/maths be reconciled with that (as of now hypothetical) observation? I thought of the single-photon version essentially only in order to remove the idea of “noise” entirely out of the picture. If there is no possibility of any noise at all, and *if* the asymmetry is still observed, wouldn’t it form sufficient evidence to demonstrate the large-scale *quantum* nature of the mechanical oscillator (including the possibilities of a transfer of a quantum state to a large-scale device)? Or would there still remain some source of a doubt? • Hi Mankei, We also thought about the issue you brought up in arxiv:1306.2699. See, for instance, a recent paper we published with Yanbei Chen and Farid Khalili (http://pra.aps.org/abstract/PRA/v86/i3/e033840). I would consider that our experiment measured both the sum AND difference of the red and blue sideband powers. The DIFFERENCE is indeed, as shown in your arxiv post mentioned above, due to the quantum noise of the light field measuring the mechanics. The noise power of the mechanics is in the SUM of the red and blue sidebands. Our experimental data was plotted as the ratio of the red and blue sidebands, which depends upon both the sum and difference of the sideband powers, and looks very different from what would be expected even for a semi-classical picture in which the light is quantized and the motion is not. • I guess we’ve already exchanged emails and come to a consensus, but just to recap, I agree that, through your calibrations, you’ve inferred zero-point mechanical motion and your result is consistent with quantum theory. The word “quantum” of course literally means something discrete, and one could argue you haven’t observed “quantum” motion yet, but that’d be nitpicking. • And to clarify, the asymmetry itself is not proof of zero-point mechanical motion or anything quantum. The mechanical energy was obtained from the SUM of the sidebands (as Oskar said), and the asymmetry was used as a *calibration* to compare the mechanical energy with the optical vacuum noise.
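(For readers following this exchange, here is the standard quantum-optomechanics bookkeeping behind the red/blue asymmetry, added as a hedged summary rather than the commenters’ own derivation: with mean phonon occupation n̄, the motional sideband powers scale as

\[
P_{\mathrm{red}} \;\propto\; \bar n + 1 ,
\qquad
P_{\mathrm{blue}} \;\propto\; \bar n ,
\qquad
\frac{P_{\mathrm{red}}}{P_{\mathrm{blue}}} = \frac{\bar n + 1}{\bar n} ,
\]

in the convention where the red (Stokes) sideband corresponds to phonon emission and the blue (anti-Stokes) one to phonon absorption; the ratio grows without bound as n̄ → 0, so the asymmetry becomes most pronounced near zero temperature.)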
• Hi Mankei, Thanks for your response. There are two main claims in your manuscript: 1) centers around the interpretation of our result, 2) is a strong claim about classical stochastic processes being the source of our observed asymmetry. In response to 1), the different interpretations of the result (and in particular, the relation between the optical vacuum noise and the zero-point motion) have been considered previously in great depth by our colleagues at IQIM (Haixing Miao and Yanbei Chen) and in Russia (Farid Khalili). I would like to point you to this paper: http://pra.aps.org/abstract/PRA/v86/i3/e033840. In response to 2), you claim to “show that a classical stochastic model, without any reference to quantum mechanics, can also reproduce this asymmetry”. We also consider this possibility in a follow-up paper which came out last year (http://arxiv.org/abs/1210.2671), where we show a derivation exactly analogous to what you’ve shown, and then go to great lengths to experimentally rule out classical noise as the source of asymmetry (by varying the probe power and showing that the asymmetry doesn’t change, and by carefully characterizing the properties of our lasers). More generally, there are fundamental limits as to what can be claimed regarding `quantum-ness' in any measurement involving only measurements of Gaussian noise. To date there have been 5 measurements of quantum effects in the field of optomechanics, our paper being the first one (the others are Brahms PRL 2012, Brooks Nature 2012, Purdy Science 2013, and Safavi-Naeini Nature 2013 (in press)). Unfortunately, all of these measurements are based on continuous measurement of Gaussian noise. There are several groups working hard on observing stronger quantum effects (as O’Connell Nature 2010 did in a circuit QED system), but we are still some months away from that. Best, Amir • Actually, I’d like to make that 6 papers – last week Cindy Regal’s group released this beautiful paper on arXiv: http://arxiv.org/abs/1306.1268. Here as well, the `quantum-ness' can only be inferred after careful calibration of the classical noise in the system, since the measurement is based on continuous measurement of Gaussian noise. • Actually, I’d like to make that 7 papers – I forgot about the result from 2008 from Dan Stamper-Kurn’s group: Murch, et al., Nature Physics 4, 561 (2008). 12. Pingback: Quantum Matter Animated! | Space & Time | S... 13. Pingback: Quantum Matter Animated! | Far Out News | Scoop.it 14. Pingback: Quantum Theory and Buddhism | Talesfromthelou's Blog 15. I get very annoyed whenever somebody uses the phrases “quantum jump” or “quantum leap” to imply a BIG change in some domain (such as “our new Thangomizer represents a quantum jump in Yoyodyne’s capabilities”). A quantum jump is the SMALLEST POSSIBLE state change in quantum mechanics, so when somebody claims their product represents a “quantum leap,” I mentally translate that as “smallest possible degree of incremental improvement over their previous product!” 16. Pingback: My comments at other blogs—part 1 | Ajit Jadhav's Weblog 17. Is it that a higher red shift and lower blue shift indicate constant shrinking of the mirror? If that is true, then do we expect the red shift to die down if we keep the mirror at 0 K for long enough? 18. Pingback: Squeezing light using mechanical motion | Quantum Frontiers 19. Pingback: The Most Awesome Animation About Quantum Computers You Will Ever See | Quantum Frontiers 20. Pingback: Hacking nature: loopholes in the laws of physics | Quantum Frontiers 21. Pingback: Human consciousness is simply a state of matter, like a solid or liquid – but quantum | Tucson Pool Saz: Tech - Gaming - News 22. Pingback: This Video Of Scientists Splitting An Electron Will Shock You | Quantum Frontiers 23. No. No, this shall not stand. Have you no heart, sir?
You have a family now, as do I. You simply cannot go around throwing the videogaming equivalent of heroin-laced crack at folks. Shame! SHAME!
From Wikipedia, the free encyclopedia [Image: lithium atom model, showing a nucleus with four neutrons (blue) and three protons (red), orbited by three electrons (black).] Smallest recognised division of a chemical element. Mass: 1.66 × 10^−27 to 4.52 × 10^−25 kg. Electric charge: zero. The atom is the basic unit of matter. It is the smallest thing that can have a chemical property. There are many different types of atoms, each with its own name, atomic mass and size. These different atoms are called chemical elements. The chemical elements are organized on the periodic table. Examples of elements are hydrogen and gold. Atoms are very small, but the exact size depends on the element. Atoms range from 0.1 to 0.5 nanometers in width.[1] One nanometer is about 100,000 times smaller than the width of a human hair.[2] This makes atoms impossible to see without special tools. Scientists use experiments to learn how they work and interact with other atoms. Atoms join together to make molecules: for example, two hydrogen atoms and one oxygen atom combine to make a water molecule. When atoms join together it is called a chemical reaction. Atoms are made up of three kinds of smaller particles, called protons (which are positively charged), neutrons (which have no charge) and electrons (which are negatively charged). The protons and neutrons are heavier, and stay in the middle of the atom. Together they are called the nucleus. They are surrounded by a cloud of electrons, which are very lightweight. The electrons are attracted to the positive charge of the nucleus by the electromagnetic force. The number of protons and electrons an atom has tells us what element it is. Hydrogen, for example, has one proton and one electron; the element sulfur has 16 protons and 16 electrons. The number of protons is the atomic number. Except for hydrogen, the nucleus also has neutrons. The number of protons and neutrons together is the atomic weight. Atoms move faster when they are in their gas form (because they are free to move) than they do in liquid form and solid matter. In solid materials, the atoms are tightly packed next to each other, so they vibrate but are not able to move around (there is no room) as atoms in liquids do. History The word "atom" comes from the Greek (ἀτόμος) "atomos", indivisible, from (ἀ)-, not, and τόμος, a cut. The first historical mention of the word atom came from works by the Greek philosopher Democritus, around 400 BC.[3] Atomic theory stayed a mostly philosophical subject, with not much actual scientific investigation or study, until the development of chemistry in the 1650s. In 1777 the French chemist Antoine Lavoisier defined the term element for the first time. He said that an element was any basic substance that could not be broken down into other substances by the methods of chemistry. Any substance that could be broken down was a compound.[4] In 1803, English philosopher John Dalton suggested that elements were tiny, solid balls made of atoms. Dalton believed that all atoms of the same element have the same mass. He said that compounds are formed when atoms of more than one element combine. According to Dalton, in a certain compound, the atoms of the compound's elements always combine the same way. In 1827, British scientist Robert Brown looked at pollen grains in water under his microscope. The pollen grains appeared to be jiggling.
Brown used Dalton's atomic theory to describe patterns in the way they moved. This was called Brownian motion. In 1905 Albert Einstein used mathematics to show that the seemingly random movements were caused by collisions with atoms and molecules, and by doing this he conclusively demonstrated the existence of the atom.[5] In 1869 the scientist Dmitri Mendeleev published the first version of the periodic table. The periodic table groups elements by their atomic number (how many protons they have; this is usually the same as the number of electrons). Elements in the same column, or group, usually have similar properties. For example, helium, neon, argon, krypton and xenon are all in the same column and have very similar properties. All these elements are gases that have no colour and no smell. Also, they hardly ever combine with other atoms to form compounds. Together they are known as the noble gases.[4] The physicist J.J. Thomson was the first person to discover electrons. This happened while he was working with cathode rays in 1897. He realized they had a negative charge, unlike protons (positive) and neutrons (no charge). Thomson created the plum pudding model, which stated that an atom was like plum pudding: the dried fruit (electrons) was stuck in a mass of positively charged pudding. In 1909, a scientist named Ernest Rutherford used the Geiger–Marsden experiment to prove that most of an atom is concentrated in a very small space called the atomic nucleus. Rutherford shot alpha particles (made of two protons and two neutrons stuck together) at a thin gold foil. Many of the particles went straight through the gold foil, which proved that atoms are mostly empty space. Electrons are so light that they make up well under 1% of an atom's mass.[6] [Image: Ernest Rutherford.] In 1913, Niels Bohr introduced the Bohr model. This model showed that electrons travel around the nucleus in fixed circular orbits. This was more accurate than the Rutherford model. However, it was still not completely right. Improvements to the Bohr model have been made since it was first introduced. In 1913, chemist Frederick Soddy found that some elements in the periodic table had more than one kind of atom.[7] For example, any atom with 2 protons should be a helium atom. Usually, a helium nucleus also contains two neutrons. However, some helium atoms have only one neutron. This means they truly are helium, because an element is defined by the number of protons, but they are not normal helium, either. Soddy called an atom like this, with a different number of neutrons, an isotope. To get the name of the isotope we look at how many protons and neutrons it has in its nucleus and add this to the name of the element. So a helium atom with two protons and one neutron is called helium-3, and a carbon atom with six protons and six neutrons is called carbon-12. However, when he developed his theory Soddy could not be certain neutrons actually existed. Proof had to wait for physicist James Chadwick, who in 1932 showed that, to account for all the weight of the atom, neutrons must exist; mass spectrometers, which measure the mass of individual atoms, made the different isotope masses easy to see.[8] In 1938, German chemist Otto Hahn became the first person to create nuclear fission in a laboratory.
He discovered this by chance when he was shooting neutrons at a uranium atom, hoping to create a new isotope.[9] However, he noticed that instead of a new isotope the uranium simply changed into a barium atom, a smaller atom than uranium. Apparently, Hahn had "broken" the uranium atom. This was the world's first recorded nuclear fission reaction. This discovery eventually led to the creation of the atomic bomb. Further into the 20th century, physicists went deeper into the mysteries of the atom. Using particle accelerators they discovered that protons and neutrons were actually made of other particles, called quarks. The most accurate model so far comes from the Schrödinger equation. Schrödinger realized that the electrons exist in a cloud around the nucleus, called the electron cloud. In the electron cloud, it is impossible to know exactly where electrons are. The Schrödinger equation is used to find out where an electron is likely to be. This area is called the electron's orbital. Structure and parts Parts [Image: a helium atom, with the nucleus shown in red (and enlarged), embedded in a cloud of electrons.] The atom is made up of three main particles: the proton, the neutron and the electron. The hydrogen isotope hydrogen-1 has no neutrons, just one proton and one electron. A positive hydrogen ion has no electrons, just one proton. These two examples are the only known exceptions to the rule that atoms have at least one proton, one neutron and one electron each. Electrons are by far the lightest of the three atomic particles; their size is too small to be measured using current technology, and their mass is only about 1/1836 of a proton's.[10] They have a negative charge. Protons and neutrons are of similar size and weight to each other;[10] protons are positively charged and neutrons have no charge. Most atoms have a neutral charge; because the number of protons (positive) and electrons (negative) is the same, the charges balance out to zero. However, in ions (which have a different number of electrons) this is not the case, and they can have a positive or a negative charge. Protons and neutrons are made out of quarks of two types: up quarks and down quarks. A proton is made of two up quarks and one down quark, and a neutron is made of two down quarks and one up quark. Nucleus The nucleus is in the middle of an atom. It is made up of protons and neutrons. Usually in nature, two things with the same charge repel or shoot away from each other. So for a long time it was a mystery to scientists how the positively charged protons in the nucleus stayed together. They solved this by finding a particle called a gluon. Its name comes from the word glue, as gluons act like atomic glue, sticking the protons together using the strong nuclear force. It is this force which also holds together the quarks that make up the protons and neutrons. [Image: a diagram showing the main difficulty in nuclear fusion, the fact that protons, which have positive charges, repel each other when forced together.] The number of neutrons in relation to protons defines whether the nucleus is stable or goes through radioactive decay. When there are too many neutrons or protons, the atom tries to make the numbers the same by getting rid of the extra particles. It does this by emitting radiation in the form of alpha, beta or gamma decay.[11] Nuclei can change through other means too.
Nuclear fission is when the nucleus splits into two smaller nuclei, releasing a lot of stored energy. This release of energy is what makes nuclear fission useful for making bombs, and electricity in the form of nuclear power. The other way nuclei can change is through nuclear fusion, when two nuclei join together, or fuse, to make a heavier nucleus. This process requires extreme amounts of energy in order to overcome the electrostatic repulsion between the protons, as they have the same charge. Such high energies are most common in stars like our Sun, which fuses hydrogen for fuel. Electrons Electrons orbit, or travel around, the nucleus. They are called the atom's electron cloud. They are attracted towards the nucleus because of the electromagnetic force. Electrons have a negative charge and the nucleus always has a positive charge, so they attract each other. Around the nucleus, some electrons are further out than others, in different layers. These are called electron shells. In most atoms the first shell has two electrons, and all after that have eight. Exceptions are rare, but they do happen and are difficult to predict.[12] The further away an electron is from the nucleus, the weaker the pull of the nucleus on it. This is why bigger atoms, with more electrons, react more easily with other atoms. The electromagnetic pull of the nucleus is not strong enough to hold onto their outermost electrons, and such atoms lose electrons to the stronger attraction of smaller atoms.[13] Radioactive decay Some elements, and many isotopes, have what is called an unstable nucleus. This means the nucleus is either too big to hold itself together[14] or has too many protons or neutrons. When this happens the nucleus has to get rid of the excess mass or particles. It does this through radiation. An atom that does this can be called radioactive. Unstable atoms continue to be radioactive until they lose enough mass/particles that they become stable. All atoms above atomic number 82 (82 protons, lead) are radioactive.[14] There are three main types of radioactive decay: alpha, beta and gamma.[15] • Alpha decay is when the atom shoots out a particle having two protons and two neutrons. This is essentially a helium nucleus. The result is an element with an atomic number two less than before. So, for example, if a beryllium atom (atomic number 4) went through alpha decay it would become helium (atomic number 2). Alpha decay happens when an atom is too big and needs to get rid of some mass. • Beta decay is when a neutron turns into a proton or a proton turns into a neutron. In the first case the atom shoots out an electron. In the second case it is a positron (like an electron but with a positive charge). The end result is an element with one higher or one lower atomic number than before. Beta decay happens when an atom has either too many protons, or too many neutrons. • Gamma decay is when an atom shoots out a gamma ray, or wave. It happens when there is a change in the energy of the nucleus. This is usually after a nucleus has already gone through alpha or beta decay. There is no change in the mass or atomic number of the atom, only in the stored energy inside the nucleus.
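The bookkeeping in the three decay types above can be written out in a few lines of code (a sketch for illustration only; the Z and A values are examples, and real decay data should come from a nuclear data table):

# Minimal sketch of decay bookkeeping: how (Z, A) change in each decay type.
# Z = number of protons (atomic number), A = protons + neutrons (mass number).

def decay(Z, A, kind):
    if kind == "alpha":          # emit a helium nucleus: 2 protons + 2 neutrons
        return Z - 2, A - 4
    if kind == "beta-minus":     # a neutron turns into a proton (electron emitted)
        return Z + 1, A
    if kind == "beta-plus":      # a proton turns into a neutron (positron emitted)
        return Z - 1, A
    if kind == "gamma":          # only the stored energy changes
        return Z, A
    raise ValueError(kind)

# Example from the text: beryllium (Z=4, here beryllium-8) alpha-decays to helium.
print(decay(4, 8, "alpha"))   # -> (2, 4): a helium-4 nucleus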
Every radioactive element or isotope has what is called a half-life. This is how long it takes half of any sample of atoms of that type to decay into a different isotope or element.[16] Half-lives vary enormously from one isotope to another, from tiny fractions of a second to billions of years, depending on how unstable the nucleus is. Radioactivity itself was discovered by Henri Becquerel. Marie Curie studied it closely and discovered a strongly radioactive element, which she named radium. She was also the first female recipient of the Nobel Prize. Frederick Soddy conducted an experiment to observe what happens as radium decays. He placed a sample in a light bulb and waited for it to decay. In time, helium (containing 2 protons and 2 neutrons) appeared in the bulb, and from this experiment he discovered that this type of radiation has a positive charge. James Chadwick discovered the neutron by observing decay products of different types of radioactive isotopes. Chadwick noticed that the atomic number of the elements was lower than the total atomic mass of the atom. He concluded that electrons could not be the cause of the extra mass, because they barely have mass. Enrico Fermi used neutrons to bombard uranium. He discovered that uranium decayed a lot faster than usual and produced a lot of alpha and beta particles. He also believed that uranium got changed into a new element he named hesperium. Otto Hahn and Fritz Strassmann repeated Fermi's experiment to see if the new element hesperium was actually created. They discovered two new things Fermi did not observe. By using a lot of neutrons, the nucleus of the atom would split, producing a lot of heat energy. Also, the fission products of uranium turned out to be elements that were already known: thorium, palladium, radium, radon and lead. Fermi then noticed that the fission of one uranium atom shot off more neutrons, which then split other atoms, creating chain reactions. He realised that this process, called nuclear fission, could create huge amounts of heat energy. That very discovery of Fermi's led to the development of the first nuclear bomb, which was tested under the code name 'Trinity'. References 1. "Size of an Atom". 2. "Diameter of a Human Hair". 3. "History of Atomic Theory". 4. "A Brief History of the Atom". 5. "Brownian motion - a history". 6. "Ernest Rutherford on Nuclear spin and Alpha Particle interaction" (PDF). 7. "Frederick Soddy, the Nobel Prize in chemistry: 1921". 8. "James Chadwick: The Nobel Prize in Physics 1935, a lecture on the Neutron and its properties". 9. "Otto Hahn, Lise Meitner and Fritz Strassmann". 10. "Particle Physics - Structure of a Matter". 11. "How does radioactive decay work?". 12. "Chemtutor on atomic structure". 13. "Chemical reactivity". 14. "Radioactivity". 15. "S-Cool: Types of radiation". 16. "What is half-life?".
Wednesday, November 30, 2016

Do you already believe in emergent gravity?

Popular writer Sabine Hossenfelder gave a highly authoritative explanation for what emergent gravity is (see this). Actually, she started by bravely expanding the notion of emergence: not only gravitation but also free will, cells, and brains emerge. You are fundamentally just a lot of fundamental particles. Get over it! I can only admire Bee's intuitive powers: not a single argument for why this would be the case was needed. Only the great (and somewhat aggressive) insight transcending the boundaries of the sciences. But why this strange emotionality about free will, life and consciousness, so untypical of a scientist? As an outsider I can only try to guess the reasons.

The idea of emergent gravitation is the newest fashion in the long sequence of fashions that has plagued theoretical physics for more than four decades: GUTs, supergravity, loop gravity, superstring models, M-theory and its descendants, multiverse, AdS/CFT, reduction of physics to that of black holes... Now it is fashionable to believe that Einstein was wrong: gravitation has a non-geometric origin - gravity as an entropic force, emergence of the graviton and 3-space, and even of space-time. Verlinde argues that the origin is thermodynamical. That this cannot be the case became experimentally clear already six years ago. I have written about this in more detail in a previous blog posting. The gravitational potential appears in the Schrödinger equation of the neutron: it should not, if the gravitational potential were a thermodynamical quantity, since thermodynamical quantities are derived from the statistical predictions of quantum theory and should not appear in quantal equations. This elementary fact was noticed by Kobakidzhe. For some funny reason, this simple observation has not gotten through, and it is probably too late now: during the next years entropic gravity will produce a lot of stuff in the archives for future sociologists of science. We are living in a post-truth period, and theoretical physics has been the forerunner in this respect.

Bee mentions as an example of emergent gravity the model of Xiao-Gang Wen and collaborators. As usual, the model turned out to be a disappointment. Space-time emerges from space-time, as it does also in other models, in the best tradition of circular logic. One replaces space-time with a lattice keeping the 4-dimensionality and assigns a finite-dimensional Hilbert space to the points of this 4-D lattice: essentially a discretization of quantum field theory is in question. One constructs the Hamiltonian as a sum of local Hamiltonians for a symmetric tensor field in such a manner that one obtains Einstein's equations in lowest order at the continuum limit. Why am I not happy with this? One of course should not assume any lattice with the structure of a 4-D lattice. One should have no tensor fields. One should have only Hilbert space. One however starts from fields in continuous space-time, discretizes it, and makes it continuous again! I have always wondered why these naive, mathematically primitive tricks, familiar already from loop gravity, keep being recycled. Superstring theories were not physically correct simply because the dimension of the fundamental objects was too small (1 instead of 3), and this actually led to the idea that space-time emerges: either by compactification or as a 3-brane or, as it seems, as both ;-). The string model was however based on refined mathematics. I can only imagine the pains suffered by Witten as he sees this intellectual degeneration of theoretical physics.
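To make concrete what kind of construction is being criticized here, a minimal toy sketch in Python/NumPy (my own illustration, not the actual model of Xiao-Gang Wen and collaborators): a finite-dimensional Hilbert space is attached to each lattice site and the Hamiltonian is built as a sum of local terms, with a 1-D chain standing in for the 4-D lattice:

import numpy as np

# Toy version of "finite-D Hilbert space at each lattice point,
# Hamiltonian as a sum of local Hamiltonians": a chain of L qubits
# with nearest-neighbour couplings sz_i sz_{i+1}.
L = 4
sz = np.diag([1.0, -1.0])   # local operator acting on one site
id2 = np.eye(2)

def site_op(op, i):
    # Embed a single-site operator at site i of the chain.
    ops = [op if j == i else id2 for j in range(L)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

H = sum(site_op(sz, i) @ site_op(sz, i + 1) for i in range(L - 1))
print(H.shape)   # (16, 16): the 2**L-dimensional total Hilbert space

The point of the sketch is only to display the circularity complained about above: the lattice geometry is put in by hand before anything "emerges".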
I have tried to explain that discretization occurs naturally due to the finite measurement resolution for both sensory experience and cognition. This however requires that consciousness and cognition are something that does not reduce to the dynamics of particles. This leads to a notion of manifold involving naturally both discretization, in terms of algebraic extensions of rationals, and continuum aspects, and also a fusion of various number fields, so that one can speak about adelic space-time - already Leibniz dreamed about this as he talked about monads. Most importantly, in this framework discretization does not lead to a loss of fundamental space-time symmetries: this is what killed loop gravity. Both the symmetries of special relativity and general coordinate invariance are exact, and new infinite-dimensional symmetry algebras - in particular a huge extension of conformal symmetries - are predicted.

I have also talked about emergence: very many things emerge in TGD. Elementary bosons, and actually also elementary fermions, emerge from induced spinor fields and the topology of wormhole contact pairs. The standard model and general relativity emerge as approximations to many-sheeted space-time, which has its most important applications in biology, neuroscience, and consciousness. These are definitely not emergent for point-like particles! Generalizations of the usual positive energy ontology to zero energy ontology and of quantum measurement theory are needed. Classical gauge fields and gravitational fields at the level of a single space-time sheet emerge from the dynamical geometry of space-time as a 4-D surface. The outcome is ridiculously simple: by general coordinate invariance there are only 4 fundamental field-like degrees of freedom - for instance, CP_2 coordinates at the macroscopic limit. The gravitational field of GRT and the gauge fields of the standard model emerge as the sheets of the many-sheeted space-time are lumped together and the gauge potentials and the deviations of the metric from the Minkowski metric sum up to the gauge potentials and the gravitational field of GRT. Space-time does not however emerge! Only the conscious experience about 3-space - proprioception - emerges, through tensor nets formed by magnetic flux tubes meeting at nodes defined by 3-surfaces. How rapid the progress in physics would be if colleagues could finally accept that also the conscious observer must be understood physically.

For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Tuesday, November 29, 2016

Mersenne integers and brain

I received a link to an interesting article "Brain Computation Is Organized via Power-of-Two-Based Permutation Logic" by Kun Xie et al in Frontiers in Systems Neuroscience (see this). The proposed model is about how the brain classifies neuronal inputs, and it suggests that the classification is based on a Boolean algebra represented as subsets of an n-element set for n inputs. The following represents my attempt to understand the model of the article.

1. One can consider a situation in which one has n inputs identifiable as bits: a bit could correspond to a neuron firing or not. The question is however how to classify the various input combinations. The obvious criterion is how many bits are equal to 1 (the corresponding neuron fires). The input combinations in the same class have the same number of firing neurons, and the number of subsets with k elements is given by the binomial coefficient B(n,k) = n!/(k!(n-k)!).
There are clearly n different classes in the classification (one for each possible number of firing neurons), since no neurons firing is not a possible observation. The conceptualization would tell how many neurons fire but would not specify which of them.

2. To represent these bit combinations one needs 2^n - 1 neuron groups, each acting as a unit representing one particular firing combination. The subsets with k elements would be mapped to neuron cliques with k firing neurons. For a given input, individual firing neurons (k = 1) would represent features, the lowest level information. The cliques with k = 2 neurons would represent a more general classification of the input. One obtains M_n = 2^n - 1 combinations of firing neurons, since the situation in which no neurons are firing is not counted as an input.

3. If all neurons are firing, then all the lower level cliques are also activated. Set theoretically, the subsets of a set, partially ordered by the number of elements, form an inclusion hierarchy, which in Boolean algebra corresponds to a hierarchy of implications in the opposite direction. The clique with all neurons firing corresponds to the most general statement, implying all the lower level statements. At the k:th level of the hierarchy the statements are mutually inconsistent, so that one has B(n,k) disjoint classes.

The Mersenne number M_n = 2^n - 1 labelling the algorithm is more than familiar to me.

1. For instance, electron's p-adic prime corresponds to the Mersenne prime M_127 = 2^127 - 1, the largest not completely super-astrophysical Mersenne prime: for larger Mersenne primes the mass of the particle would be extremely small. Hadron physics corresponds to M_107, and M_89 to weak bosons and to a possible scaled-up variant of hadron physics with mass scale scaled up by a factor 512 (= 2^((107-89)/2)). Also Gaussian Mersennes seem to be physically important: for instance, the muon and also nuclear physics correspond to the Gaussian Mersenne M_G,n = (1+i)^n - 1, n = 113.

2. In biology the Mersenne prime M_7 = 2^7 - 1 is especially interesting. The number of statements in a Boolean algebra of 7 bits is 128, and the number of statements consistent with a given atomic statement (one bit fixed) is 2^6 = 64. This is the number of genetic codons, which suggests that the letters of the code represent 2 bits each. As a matter of fact, the so-called Combinatorial Hierarchy M(n) = M_M(n-1) consists of the Mersenne primes 3, 7, 127, 2^127 - 1 and would have an interpretation as a hierarchy of statements about statements about... It is not known whether the hierarchy continues beyond M_127 and what it means if it does not continue. One can ask whether M_127 defines a higher level code - the memetic code, as I have called it - realizable in terms of DNA codon sequences of 21 codons (see this).

3. The Gaussian Mersennes M_G,n, n = 151, 157, 163, 167, can be regarded as a number theoretical miracle, since these primes are so near to each other. They correspond to p-adic length scales varying between the cell membrane thickness of 10 nm and the cell nucleus size of 2.5 μm and should be of fundamental importance in biology. I have proposed that p-adically scaled-down variants of hadron physics and perhaps also of weak interaction physics are associated with them.

I have made attempts to understand why the Mersenne primes M_n, and more generally primes near powers of 2, seem to be so important physically in the TGD Universe.

1. The states formed from n fermions form a Boolean algebra with 2^n elements, but one of the elements is the vacuum state and could be argued to be non-realizable. Hence the Mersenne number M_n = 2^n - 1. The realization as an algebra of subsets contains the empty set, which is also physically non-realizable.
Mersenne primes are especially interesting, since the reduction of the number of statements to the prime nearest to M_n corresponds to the number M_n - 1 of physically representable Boolean statements.

2. Quantum information theory suggests itself as an explanation for the importance of Mersenne primes, since M_n would correspond to the number of physically representable Boolean statements of a Boolean algebra with n elements. The prime p ≤ M_n could represent the number of elements of the Boolean algebra representable p-adically (see this).

3. In TGD the fermionic Fock state basis has an interpretation as elements of a quantum Boolean algebra, and fermionic zero energy states in ZEO, expressible as superpositions of pairs of states with the same net fermion numbers, can be interpreted as logical implications. WCW spinor structure would define quantum Boolean logic as a "square root of Kähler geometry". This Boolean algebra would be infinite-dimensional, and the above classification of the abstractness of a concept by the number of elements in a subset would correspond to a similar classification by fermion number. One could say that the bosonic degrees of freedom (the geometry of 3-surfaces) represent the sensory world and the spinor structure (many-fermion states) represents logical thought in the quantum sense.

4. Fermion number conservation would seem to represent an obstacle, but in ZEO it can be circumvented, since zero energy states can be superpositions of pairs of states with opposite fermion numbers F at the opposite boundaries of the causal diamond (CD) in such a manner that F varies. In state function reduction, however, localization to a single value of F is expected to happen usually. If superconductors carry coherent states of Cooper pairs, the fermion number for them is ill defined; this makes sense in ZEO but not in standard ontology, unless one gives up the super-selection rule that the fermion number of quantum states is well-defined.

One can of course ask whether the primes n defining Mersenne primes (see this) could define preferred numbers of inputs for subsystems of neurons. This would predict that n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, ... define the favoured numbers of inputs. n = 127 would correspond to the memetic code.

See the article Why Mersenne Primes Are So Special? or the chapter Unified Number Theoretical Vision of "Physics as Generalized Number Theory". For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Sunday, November 27, 2016

Does the presence of cosmological constant term make Kähler coupling strength a genuine coupling constant classically?

The addition of the volume term to Kähler action has a very nice interpretation as a generalization of the equations of motion for a world-line extended to a 4-D space-time surface. The field equations generalize in the same manner for 3-D light-like surfaces, at which the signature of the induced metric changes from Minkowskian to Euclidian, for 2-D string world sheets, and for their 1-D boundaries defining world lines at the light-like 3-surfaces. For 3-D light-like surfaces the volume term is absent. Either the light-like 3-surface is freely choosable, in which case one would have Kac-Moody symmetry as a gauge symmetry, or the extremal property for the Chern-Simons term fixes the gauge. The known non-vacuum extremals are minimal surface extremals of Kähler action, and it might well be that the preferred extremal property realizing SH quite generally demands this.
The addition of the volume term could however make the Kähler coupling strength a manifest coupling parameter also classically when the phases of Λ and α_K are the same. Therefore quantum criticality for Λ and α_K would have a precise local meaning also classically in the interior of the space-time surface. The equations of motion for a world line of a U(1)-charged particle would generalize to field equations for a "world line" of a 3-D extended particle. This is an attractive idea consistent with standard wisdom, but one can invent strong objections against it in the TGD framework.

1. All known non-vacuum extremals of Kähler action are minimal surfaces, and the minimal surface vacuum extremals of Kähler action become non-vacuum extremals. This suggests that preferred extremals are minimal surface extremals of Kähler action, so that the two dynamics apparently decouple. Minimal surface extremals are the analogs of geodesics in the case of point-like particles: one might say that one has only the gravitational interaction. This conforms with SH, stating that gauge interactions at the boundaries (orbits of partonic 2-surfaces and 2-surfaces at the ends of CD) correspond classically to the gravitational dynamics in the space-time interior. Note that at the boundaries of the string world sheets at light-like 3-surfaces the situation is different: one has equations of motion for a geodesic line coupled to the induced Kähler gauge potential, and the gauge coupling indeed appears classically, as one might expect! For string world sheets one has only the topological magnetic flux term and the minimal surface equation in the string world sheet. The magnetic flux term gives the Kähler coupling at the boundary.

2. Decoupling would allow one to realize number theoretical universality, since the field equations would not depend on the coupling parameters at all. It is very difficult to imagine how the solutions could be expressible in terms of rational functions with coefficients in an algebraic extension of rationals unless α_K and Λ have a very special relationship. If they have different phases, minimal surface extremals of Kähler action are automatically implied. If the values of α_K correspond to complex zeros of Riemann ζ, also Λ should have the same complex phase in order to have a genuine classical coupling. This looks somewhat unnatural but cannot be excluded. The most natural option is that Λ is real and α_K corresponds to zeros of zeta (a small numerical illustration of these zeros follows after this list). For non-trivial zeros the phases are then different and decoupling occurs. For trivial zeros Λ and α_K differ by an imaginary unit, so that again decoupling occurs.

3. One can argue that the decoupling makes it impossible to understand coupling constant evolution. This is not the case. The point is that the classical charges assignable to the super-symplectic algebra are sums over contributions from the Kähler action and the volume term and therefore depend on the coupling parameters. Their vanishing conditions for a sub-algebra and its commutator with the entire algebra give boundary conditions on the preferred extremals, so that coupling constant evolution creeps in classically! The condition that the eigenvalues of fermionic charge operators are equal to the classical charges brings in the dependence of the quantum charges on the coupling parameters. Since the elements of the scattering matrix are expected to involve as building bricks the matrix elements of the super-symplectic algebra and the Kac-Moody algebra of isometry charges, one expects that discrete coupling constant evolution creeps in also quantally via the boundary conditions for the preferred extremals.
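As a numerical footnote to the conjecture that 1/α_K comes as zeros of ζ, a minimal sketch assuming the mpmath library (the identification itself is of course the speculative part; the code merely displays the zeros):

import mpmath

# Non-trivial zeros of Riemann zeta lie on the critical line s = 1/2 + iy,
# so a 1/alpha_K proportional to such a zero is genuinely complex,
# while the trivial zeros s = -2, -4, ... are real.
for k in range(1, 4):
    s = mpmath.zetazero(k)          # k-th non-trivial zero
    print(k, s, "phase:", mpmath.arg(s))
# First zeros: 0.5 + 14.1347...i, 0.5 + 21.0220...i, 0.5 + 25.0109...i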
Although the above arguments seem to kill the idea that the dynamics of Kähler action and the volume term could couple in the space-time interior, one can compare this view (Option II) with the view based on complete decoupling (Option I).

1. For Option I the coupling between the two dynamics could be induced just by the condition that the space-time surface becomes an analog of a geodesic line by arranging its interior so that the U(1) force vanishes! This would generalize the Chladni mechanism! The interaction would be present but be based on going to the nodal surfaces! Also the dynamics of string world sheets is similar: if the string sheets carry vanishing classical W boson fields, the em charge is well-defined and conserved. One would also avoid the problems produced by a large coupling constant between the two dynamics, present already at the classical level. At the quantum level, the fixed point property of quantum critical couplings would be the counterpart of decoupling.

2. For Option II the coupling is of the conventional form. When the cosmological constant is small, as in the scale of the known Universe, the dynamics of Kähler action is perturbed only very slightly by the volume term. The alternative view is that the minimal surface equation has a very large perturbation proportional to the inverse of Λ, so that the dynamics of Kähler action could serve as a controller of the dynamics defined by the volume term, providing a small push or pull now and then. Could this sensitivity relate to quantum criticality and to the view about morphogenesis relying on the Chladni mechanism, in which field patterns control the dynamics, with charged flux tubes ending up at the nodal surfaces of the (Kähler) electric field (see this)? Magnetic flux tubes containing dark matter would in turn control and serve as a template for the dynamics of ordinary matter.

Could the possible coupling of the two dynamics suggest any ideas about the values of α_K and Λ at quantum criticality, besides the expectation that the cosmological constant is proportional to an inverse of a p-adic prime?

1. The number theoretic vision suggests the existence of preferred extremals represented by rational functions with rational or algebraic coefficients in preferred coordinates. For Option I one has preferred extremals of Kähler action which are minimal surfaces, so that there is no coupling and no constraint on the ratio of the couplings emerges: even better, both dynamics are independent of the couplings. All known non-vacuum extremals of Kähler action are indeed also minimal surfaces. For Option II the ratio of the coefficients Λ/8πG and 1/(4πα_K) should be a rational or at most an algebraic number. One must however be very cautious here: the minimal option allowed by the strong form of holography is that the rational functions of the proposed kind emerge only at the level of partonic 2-surfaces and string world sheets.

2. I have proposed that the inverse of the Kähler coupling strength has a spectrum coming as zeros of zeta or their imaginary parts (see this). The phases of the complexified 1/α_K and Λ/2G must be the same in order to avoid the decoupling of Kähler action and the minimal surface term implying minimal surface extremals of Kähler action. This conjecture is consistent with the rational function property only if α_K and the vacuum energy density ρ_vac, appearing as the coefficient of the volume term, are proportional to the same possibly transcendental number, with the proportionality coefficient being an algebraic or rational number.
If the phases are not identical (say Λ is real and one allows complex zeros), one has Option I and effective decoupling occurs. The coupling (Option II) can occur for the trivial zeros of zeta if the volume term has the coefficient iΛ/8πG rather than Λ/8πG, to guarantee the same phase as for 1/(4πα_K). The coefficient iΛ/8πG would give in Minkowskian regions a large real exponent of the volume, and this looks strange. In this case also number theoretical universality might make sense, but SH would be broken in the sense that the space-time surfaces would not be analogous to geodesic lines.

3. At the quantum level, number theoretical universality requires that the exponent of the total action defining the vacuum functional reduces to a product of a root of unity and an exponent of an integer existing in a finite-dimensional extension of p-adic numbers. This would suggest that the total action reduces to a number of the form q_1 + iq_2π, with q_i rational numbers, so that its exponent is of the required form. Whether this can conform with the properties of the zeros of zeta and the properties of the extremals is not clear.

ZEO suggests deep connections with the basic phenomenology of particle physics, quantum consciousness theory, and quantum biology, and one can look at the situation for both options.

1. Option I: Decoupling of the dynamics of Kähler action and volume term in the space-time interior for all values of the coupling parameters.
2. Option II: Coupling of the dynamics for trivial zeros of zeta and Λ → iΛ.

Particle physics perspective

Consider a typical particle physics experiment. There are incoming and outgoing free particles moving along geodesics; these particles interact and emanate as free particles from the interaction volume. This phenomenological picture does not follow from quantum field theory but is put in by hand; in particular, the idea about interaction couplings becoming non-zero is involved. Also the role of the observer remains poorly understood.

The motion of incoming and outgoing particles is analogous to free motion along geodesic lines, with particles generalized to 3-D extended objects. For both options these would correspond to the preferred extremals in the complement of CD within a larger CD representing the observer or measurement instrument. Decoupling would take place. In the interaction volume interactions are "coupled on" and particles interact inside the volume characterized by a causal diamond (CD). What would be the TGD translation of this picture?

1. For Option I one would still have decoupling, and the interpretation would be in terms of the twistor picture, in which also the internal lines carry on-mass-shell particles but with complex four-momenta. In the TGD framework the momenta would always be complex due to the contribution of Euclidian regions defining the lines of generalized scattering diagrams. As explained, coupling constant evolution can be understood also in this case, and also the classical dynamics depends on the coupling parameters via the boundary conditions. The transitory period (control action) leading to the decoupled situation would be replaced by a state function reduction, possibly to the opposite boundary.

2. For Option II the transitory period would correspond to the coupling between the two classical dynamics and would take place inside CD after a phase transition identifiable as a "big state function reduction" to a time-reversed mode.
The problem is that in the interacting phase α_K would not have a value approximately equal to the U(1) coupling strength of weak interactions (see this), so that the physical picture breaks down.

Quantum measurement theory in ZEO

1. For Option I state preparation and state function reduction would be in a symmetric role. Also now there would be an inherent asymmetry between zero energy states and their time reversals. With respect to the observer the time-reversed period would be invisible.

2. For Option II state preparation for CD would correspond to a phase transition to a time-reversed phase labelled by a trivial zero of zeta and Λ → iΛ. In the state function reduction to the original boundary of CD a phase transition to a phase labelled by a non-trivial zero of zeta would occur, and the final state of free particles would emerge. The phase transitions would thus mean hopping from the critical line of zeta to the real axis and back, and would change the values of α_K and possibly Λ. There would be a strong breaking of time reversal symmetry. One cannot of course take this large asymmetry as an ad hoc assumption: it should be induced by the presence of a larger CD, which could also affect quite generally the values of α_K and Λ (having also a spectrum of values).

TGD inspired theory of consciousness

What happens within a sub-CD could be fundamental for the understanding of directed attention and the sensory-motor cycle.

1. The target of directed attention would correspond to the volume of a CD - call it c - within a larger CD - call it C - representing the observer, the attendee having c as part of its perceptive field. c would serve as a target of directed attention of C and thus define part of the perceptive field of C. c would correspond also to a sub-self giving rise to a mental image of C. This would also allow one to understand why attention is directed rather than being completely symmetric with respect to C and c. For both options directed attention would correspond to the sub-self c interpreted as a mental image. There would be no difference.

2. Quite generally, the self and the time-reversed self could be seen as sensory input and motor response (Libet's findings). Directed attention would define the sensory input, and the sub-self could react to it by dying and re-incarnating as a time-reversed sub-self. The two selves would correspond to a sensory input and the motor action following it as a reaction. The motor reaction would be a sensory mental image in the reversed time direction, experienced by the time-reversed self. Only the description of the reaction would differ for the two options. The motor action would be a time-reversed sensory perception for Option I. For Option II the motor action would correspond to a different phase, in which Kähler action and the volume term couple classically.

TGD inspired quantum biology

The free geodesic line dynamics with vanishing U(1) Kähler force indeed brings to mind the proposed generalization of the Chladni mechanism generating nodal surfaces, to which charged magnetic flux tubes are driven (see this; a small sketch of such nodal patterns follows below).

1. For Option I the interiors of all space-time surfaces would be analogous to nodal surfaces, and state function reductions would correspond to transition periods between different nodal surfaces. The decoupling would be a dynamics of avoidance and could be highly analogous to the Chladni mechanism.

2. For Option II the phase labelled by the trivial zeros of zeta would correspond to the period during which nodal surfaces are formed.
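For readers unfamiliar with the Chladni analogy, a minimal sketch (my own illustration, assuming NumPy) of the nodal lines of a vibrating square plate - the loci to which sand, or here by analogy charged flux tubes, would be driven:

import numpy as np

# Classic Chladni-type standing wave on the unit square for the mode pair (m, n):
# u(x, y) = cos(m*pi*x)*cos(n*pi*y) - cos(n*pi*x)*cos(m*pi*y).
# The nodal set u = 0 is where the plate does not vibrate.
m, n = 1, 3
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
u = np.cos(m*np.pi*x)*np.cos(n*np.pi*y) - np.cos(n*np.pi*x)*np.cos(m*np.pi*y)
nodal = np.abs(u) < 1e-2            # crude picture of the nodal lines
print("fraction of grid points near a nodal line:", nodal.mean())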
This view about state function reduction and preparation as phase transitions in ZEO would provide a classical description for the transition to the phase without direct interactions.

To sum up, it seems that the complete decoupling of the two dynamics (Option I) is favored by SH, by the realization of the preferred extremal property (perhaps as minimal surface extremals of Kähler action), by number theoretical universality, by discrete coupling constant evolution, and by the generalization of the Chladni mechanism to a dynamics of avoidance.

For background see the new chapter How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD? of "Towards M-matrix" or the article with the same title. For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Thursday, November 24, 2016

More precise interpretation of gravitational Planck constant

The notion of the gravitational Planck constant h_gr = GMm/v_0 was introduced originally by Nottale. In TGD it was interpreted in terms of astrophysical quantum coherence. The interpretation was that h_gr characterizes a gravitational flux tube connecting masses M and m, and v_0 is a velocity parameter - some characteristic velocity assignable to the system. It has become clear that a more precise formulation of the rather loose ideas about how the gravitational interaction is mediated by flux tubes is needed.

1. The assumption treats the two masses asymmetrically.

2. A huge number of flux tubes is needed, since every particle pair M-m would involve a flux tube. It would also be difficult to understand the fact that one can think of the total gravitational interaction in the Newtonian framework as a sum over interactions with the composite particles of M. In principle M can be decomposed into parts in many manners - elementary particles, their composites, and larger structures formed from them: there must be some subtle difference between these different compositions - all need not be possible - not seen in Newtonian and GRT space-time but maybe having a representation in many-sheeted space-time and involving h_gr.

3. The flux tube picture in its original form seems to lead to problems with the basic properties of the gravitational interaction: namely, the superposition of gravitational fields and the absence, or at least smallness, of screening by masses between M and m. One should assume that the ends of the flux tubes associated with the pair M-m move as m moves with respect to M. This looks too complex.

Linear superposition and the absence of screening can be understood in the picture in which particles form topological sum contacts with the flux tubes mediating the gravitational interaction. This picture is used to deduce the QFT-GRT limit of TGD. Note that also other space-time sheets can mediate the interaction, and pairs of MEs and flux tubes emanating from M but not ending at m are one possible option. In the following I however talk about flux tubes.

These problems find a solution if h_gr characterizes the magnetic body (MB) of a particle with mass m topologically condensed at a flux tube carrying the total flux of M. m can also correspond to a mass larger than an elementary particle mass. This makes the situation completely symmetric with respect to M and m. The essential point is that the interaction takes place via the touching of the MB of m with flux tubes from M.
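To get a feeling for the orders of magnitude before going on, a back-of-the-envelope sketch in Python (the choice v_0 = 2^-11 c is a Nottale-type value often used in this context and is an assumption here, as is the choice of masses):

# Order-of-magnitude estimate of n = h_gr/hbar = GMm/(v_0*hbar)
# for M = Earth mass and m = electron mass.
G    = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34        # J s
c    = 3.0e8            # m/s
M    = 5.97e24          # kg, Earth
m    = 9.11e-31         # kg, electron
v0   = c / 2**11        # assumed velocity parameter, ~146 km/s

n = G * M * m / (v0 * hbar)
print(f"h_gr/hbar ~ {n:.1e}")   # a huge number, of order 1e13

Nothing in the sketch is TGD-specific; it only shows why the hypothesis implies macroscopic quantum coherence: the effective Planck constant is enormous.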
1. In accordance with the fractality of the many-sheeted space-time, the elementary particle fluxes from a larger mass M can combine to a sum of fluxes corresponding to masses M_i < M with ∑ M_i = M at larger flux tubes with hbar_gr = GMM_i/v_0,i > hbar. This can take place in many manners, and in many-sheeted space-time it gives rise to different physical situations. Due to the large value of h_gr it is possible to have macroscopic quantum phases at these sheets with a universal gravitational Compton length L_gr = GM_i/v_0. Here m can also be a mass larger than an elementary particle mass. In fact, the convergence of perturbation theory indeed makes the macroscopic quantum phases possible. This picture holds true also for the other interactions. Clearly, many-sheeted space-time brings in something new, and there are excellent reasons to believe that this new relates to the emergence of complexity - say via many-sheeted tensor networks (see this).

2. Quantum criticality would occur near the boundaries of the regions from which flux runs through wormhole contacts from smaller to larger flux sheets, and would thus be associated with boundaries defined by the throats of wormhole contacts at which the induced metric changes from Minkowskian to Euclidian.

3. This picture implies the fountain effect - one of the applications of the large h_gr phase: a kind of antigravity effect for dark matter - maybe even for non-microscopic masses m - since the larger size of the MB implies a larger average distance from the source of the gravitational flux, so that the experienced gravitational field is weaker. This might have technological applications some day.

This picture is a considerable improvement, but there are still problems to ponder. In particular, one should understand why the integer n = h_eff/h = h_gr/h, interpreted as the number of sheets of the singular covering space of the MB of m, emerges topologically. The large value of h_gr implies a huge number of sheets. Could the flux sheet covering associated with M_i code the value of M_i, using the Planck mass as the unit, as the number of sheets of this covering? One would have an N = M/M_Pl -sheeted structure with each sheet carrying a Planckian flux. The fluxes experienced by the MB of m would in turn consist of sheets carrying fusions of n_m = M_Pl v_0/m Planckian fluxes, so that the total number of sheets would be reduced to n = N/n_m = GMm/v_0 sheets (in units with hbar = c = 1). Why should this kind of fusion of Planck fluxes to larger fluxes happen? Could quantum information theory provide clues here? And why is v_0 involved?

For background see the chapter Criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy". For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Monday, November 21, 2016

About interactions of selves and their time reversals (and few words about ghosts)

I have been thinking about the time-reversed zero energy states and the corresponding conscious entities predicted by Zero Energy Ontology (ZEO). I find that forcing myself even to write about this is difficult. The fear is that the whole nice scenario falls down by predicting something totally absurd. I however force myself to ask the questions. What could these ghostly time-reversed entities be? Do they interact with those with the standard time orientation? How could they do so?
In ZEO self corresponds to a generalized Zeno effect, that is, a sequence of state function reductions leaving the passive boundary of CD unaffected, as also the members of the state pairs associated with the 3-surfaces at it. At the active boundary the members of the state pairs change, and the active boundary drifts reduction by reduction farther away from the passive boundary. The temporal distance between the tips of CD increases gradually and corresponds to the experienced flow of time.

Negentropy Maximization Principle (NMP) eventually forces the self to die by making the first reduction to the passive boundary of its causal diamond (CD), which now becomes the active boundary: a new time-reversed self is born. This option is forced because it produces more negentropy. For this self the arrow of geometric time would be opposite, since now the formerly passive boundary would be active and shift in the opposite direction of time: in this manner CD would steadily increase in size. Also the time-reversed self would eventually die and make the first reduction to the opposite - the original - boundary of CD. The position of the active boundary in this first reduction would be shifted to the geometric future from the original position. The first - and, as will be found, probably wrong - guess for the size of the shift towards the geometric future from the position at the moment of the previous death would be the average increase of the temporal distance between the tips of CD during the Zeno period. This increment could be rather small as compared to the size of CD itself.

This picture raises questions.

1. Do we make this kind of jump to time-reversed life at some level of our personal self hierarchy as we fall asleep? If the wake-up period corresponds to re-incarnation in the original time direction, the time increment of CD from its previous value would be the duration of the sleeping period as seen by a larger conscious system. This is much longer than the subjective chronon of about 0.1 seconds for sensory mental images.

Remark: Note that EEG splits into pieces of duration about 300 ms, as found by the Fingelkurts brothers (see this), and it might be possible to identify in EEG periods which correspond to mental images and their time reversals. These periods could differ by a phase conjugation, although the power spectrum would have the same typical behavior (a sound wave and its phase conjugate have the same power spectrum, but we can distinguish a sound and its time reversal from each other).

Could the first big reduction correspond to a time increment which is of the same order of magnitude as the total duration of the life cycle of the time-reversed self? The size of the 3-surfaces at the boundary of the time-reversed CD has increased by about a lifetime. Could the first reduction to the opposite boundary increase the size of the 3-surface at this boundary by the same amount? If so, the re-incarnations for human life cycles would take place roughly a lifetime after death.

Could one identify a negative energy time-reversed signal as a time-reversed self at some level of the hierarchy? If so, then the selves associated with CDs could gradually increase their energy by dying and re-incarnating repeatedly, since also the magnitude of the negative energy at the opposite boundary would increase. This is in principle possible, since conservation laws hold true by the very definition of zero energy states as well as for the classical time evolutions appearing in their quantum superposition.
The average energy for a given member of the pair defining the zero energy state would increase gradually. The size of the CD associated with the re-incarnating self could become arbitrarily large, and it could gain an arbitrarily high total energy: the wildest speculation is that cosmologies correspond to very large selves (see this).

2. Could selves/systems living in opposite directions of time have direct interactions? If the vision that motor actions are realized as negative energy signals, travelling to the brain of the geometric past and inducing neural activity a fraction of a second earlier than the conscious decision was made (Libet's finding), is correct, this could be the case. A motor action could correspond to a death of the sensory self, re-incarnation as a time-reversed motor-self, and a re-incarnation as a sensory self in a time scale of 0.1 seconds. The sensory-motor cycle would correspond to a sequence of re-incarnations as a time-reversed sub-self.

3. How could the time-reversed selves reveal themselves? If their presence can indeed be detected, a key signature would be the opposite direction of the thermodynamical arrow of time for them. Heat would apparently be transferred in the wrong direction: from cold to hot. This kind of apparent breaking of the second law has been observed: phase conjugate laser waves and acoustic signals represent examples of this. Fantappie suggested that they occur routinely in living matter and introduced the notion of syntropy as a time-reversed counterpart of entropy. The strange cooling of the air at the magnetic walls associated with the rotating magnetic systems (see this) provides a second example.

4. Good music is claimed to send cold shivers down the spine, and sensations of cold are associated also with the perception of ghosts. Could the claims about encounters with ghosts be due to a perception of time-reversed selves? I remember that in my personal great experience three decades ago the entire body went into a state analogous to that created by good music. Did I interact with a time-reversed conscious entity? My experience indeed was that I was in contact with what I called Great Mind. This is of course just a subjective experience, and the skeptic scientist knows that I was in a psychotic state, since it is completely obvious from my scientific work even without reading it that I am a madman ;-).

For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Weight change for electrets and "weight of soul"

Also the weight of electrets has been found to change, as the ResearchGate conference article by Schreiber and Tajmar reports (see this). They refer also to other works reporting anomalous-looking weight changes. Recall that electrets are systems possessing a spontaneous electric polarization and are therefore analogous to magnets. The electret property allows transforming electric signals to mechanical ones and vice versa. Living systems are full of electrets.

The electrets were produced from organic materials (the organic origin might be relevant) by a procedure described in the article: melting at a temperature of 120 degrees Celsius, followed by the application of an external high-voltage (10 kV) electrostatic field forcing the microscopic electric dipoles to orient in parallel, until complete solidification was reached at room temperature. Fig. 3 describes the schematic model for the resulting electret containing parallel electric dipoles and free positive and negative charges.
The polarization of the electret is not completely stable and can change or disappear. There are two kinds of free charges near the ends of the electret: the region near the negative pole contains more positive than negative charges, and the region near the positive pole more negative than positive charges. These two kinds of charges, known as heterocharges and homocharges, have different relaxation times. Therefore the relaxation can lead to a change of the polarization voltage and even of its direction.

Two kinds of measurements were performed. Both the resulting polarization of the electret and its weight were measured in the first experiment (see Fig. 7 of this). The voltage for these electrets changed after half an hour: the voltage dropped first from 3 kV to about 2.82 kV and then suddenly jumped to 3.425 kV. The weight showed, after an initial fluctuation period, a sharp increase to a saturation value taking place after 5.5 hours, so that there was a 5-hour lag. For an unpolarized electret the weight was found to increase steadily (see Figure 9 of this). The overall change of the weight during 20 hours was Δg/g ∼ 2×10^-4 in both measurements.

The change of the electric field of the polarized electret was accompanied by an increase of weight, followed by a fluctuating period with vanishing average weight increase, then a sudden increase after 5 hours, and finally a steady increase. The overall change in both cases was about Δg/g ∼ 2×10^-4. Maybe the behavior of the polarized electret could be seen as that of a depolarized electret perturbed by the change in the value of the polarization. There was a 5-hour lag before the sudden change in Δg/g: as if the steady weight increase occurring for the electret with no polarization had been prevented by the change of the polarization and transformed to a fluctuation lasting for about 5 hours before returning to a nearly normal value.

The challenge is to understand the cause of the weight increase and why it was affected by the change in polarization. The models for the weight change of a rotating magnetic system and for the weight change induced by the presence of a light-box suggest that a continual feed of dark photons transformed to ordinary photons was involved. One can consider two options in this framework: the electret sends negative energy dark photons to some system below the electret able to receive them, or the source system located above the electret sends positive energy dark photons to the electret.

1. Since the electret system consists of organic material, one might think that it could still be able to regenerate a connection to its magnetic body carrying a magnetic field - say the endogenous magnetic field B_end = 0.2 Gauss. Perhaps the transformation to an electret returned the ability to regenerate this connection by generating an ordered phase of dipoles: could one say that the external field "revived" the organic material?

2. The magnetic body located above the system sends dark positive energy photons to the electret, in which they are partially transformed to ordinary photons. B_end can have flux tubes also below the Earth's surface, and the electret could get energy by remote metabolism by constantly sending negative energy dark photons downwards. This would give rise to an increase of the effective weight.

What other models can one imagine?

1. One can also imagine that dark mass of order Δm/m ∼ 2×10^-4 flows from the magnetic body to the system and transforms to ordinary matter.
2. I have already earlier encountered the number 2×10^-4 assigned with the endogenous magnetic field B_end = 0.2 Gauss. The proposed interpretation was that the flux tubes of B_end correspond to gravitational flux tubes for a dark mass M_D ∼ 2×10^-4 M_E. Could one think that the revived system regenerates gravitational flux tube connections to this mass and experiences the gravitational field generated by it? The arguments used however strongly suggest that M_D must reside at the distance of the Moon in a spherical layer: this conforms with the vision about how the condensation of visible matter around dark matter creates the astrophysical objects. In Newton's theory, however, the net gravitational force from such a layer should be very small at the surface of Earth, since the different contributions to the force would cancel. M_D should reside considerably below the surface of Earth for this model to make sense. The flux tube picture, which distinguishes between TGD and Newton's theory, could however save the situation: the gravitational flux would arrive along flux tubes through wormhole contacts below the surface of Earth and then spread out radially, giving an additional contribution to the Earth's gravitational field and causing the weight increase. This explanation does not apply to rotating magnetic systems, nor to the change of weight due to light. The objection is that the system cannot just decouple from the flux tubes. Also the conservation of the gravitational flux, which could correspond basically to the conservation of Kähler magnetic monopole flux, prevents this.

3. The third option is that the mass of the electret has also a dark contribution, coming perhaps from its own personal MB - its "soul"! MB as an intentional agent indeed behaves in many respects like a "soul". This is just what I proposed many years ago: as the ageing biological body gets uninteresting, the MB finds a more interesting target of attention. In this case death would mean the loss of the MB and also a loss of weight Δm/m ≈ 2×10^-4. Also Earth could have a magnetic body, and it could indeed correspond to the dark mass at the distance of the Moon if the ratio M_D/M is universal. Could the flux tubes from Earth carrying monopole flux go at this distance to another space-time sheet through wormhole contacts carrying the quantum numbers of dark matter particles at their throats, and return near Earth's core, where they would return to the original space-time sheet and turn back to form a loop? Could these loops be just elementary particles with h_eff = h_gr?

An interesting test is to see what happens as an organism dies: is its weight changed - reduced - as these experiments would suggest? For a weight of 100 kg the weight reduction would be 20 g, if one can extrapolate from the above measurements. Amusingly, the "weight of soul" has been measured, and - believe it or not - the average result is 21 g (see this)! Of course, one can invent many explanations for the weight change and also challenge its occurrence, and skeptics of course ridicule the idea of detecting the possible weight change because someone has uttered the word "soul" in this context.

For details see the article The anomalies in rotating magnetic systems as a key to the understanding of morphogenesis? For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Friday, November 18, 2016

Could the presence of light affect weight?
We had an intense Facebook discussion yesterday with Ulla, Joseph, and Sebastian on rotating magnetic systems, and someone gave a link to a very interesting experiment in which light arrives horizontally in a box and is reflected there back and forth in a 6-layered structure (see this). It is reported that the presence of the light-box reduces the gravitational force on an object above the box and increases it for an object below the light-box.

Could the TGD explanation be similar to that for the reduction of the weight of the rotating system in the Godin-Roschin experiment? This might be the case, although the reduction of weight is a fraction of order 0.1 per cent and much smaller than the maximal reduction of 35 per cent in the G&R experiment. This could be understood if dark photons with energies scaled up by a factor h_eff/h = n = h_gr/h result from ordinary photons by a small leakage, or vice versa. In the G&R experiments the beam of photons arriving at the system is dark. In the intense FB discussion I misunderstood the system: laser light is not used, and it seems that the photons travelling back and forth in the light-box can have an almost arbitrary direction due to the repeated reflections. I hope that I understand correctly this time. I must admit that I could not fully understand the illustration of the light-box. Anyone with better visual skills is welcome to help!

The next trial to explain the effect led to a strange conclusion: the momentum direction for the dark photons exchanged between the light-box and the test mass must be opposite to their direction of propagation. This violates Quantum Classical Correspondence (QCC), which is a basic principle of TGD. In the light-hearted brainstorming mood I was ready to accept this, but soon realized that this won't go. After that it was easy to see that Zero Energy Ontology (ZEO) solves this problem. This however leads to a dramatically new manner of interpreting gravitation and also the other interactions. This interpretation is however not in conflict with existing physics, although it would conform with the vision of Sheldrake.

Consider first how the gravitational force, realized as momentum exchange by dark gravitons along the flux tubes connecting the test mass to Earth, could work.

1. The weight gets momentum increments Δp assignable to gravitons with some rate, and this gives rise to a net momentum transfer rate dp/dt defining the gravitational force. The reaction law holds in the sense that the mass gets a momentum increment Δp when a momentum -Δp travels along the flux tube to Earth, which gets the opposite momentum increment. Note that the direction of Δp is opposite to the direction of travel of the graviton in positive energy ontology! Also the energy of the graviton is negative.

2. This does not conform with the classical expectation about (virtual) gravitons as localized wave packets. The momentum increment -Δp can be said to travel in the direction of Δp rather than in its own direction, as one might expect!

How could one cure this problem?

1. Should one give up QCC, although it is a basic principle of TGD? Could one argue that gravitation is a quantum macroscopic interaction - something totally different from, say, entropic gravity - and that one must speak of non-localized waves of momentum Δp in the scale of the entire system even in astrophysical situations, so that classical intuition fails? This is what TGD indeed predicts via the h_eff/h = n = h_gr/h hypothesis.

2. Or should one replace positive energy ontology with ZEO and interpret the momentum exchange as taking place in the reversed time direction?
ZEO could allow one to achieve this correspondence in terms of remote metabolism, in which the test mass sends negative energy dark gravitons, travelling in the reversed direction of geometric time, to a system able to absorb them, and gains positive energy as a recoil. The test mass would send to the geometric past negative energy dark gravitons with momentum -Δp (this momentum is directed upwards), the absorbing system would get a positive energy gain, and the test mass would get a downwards directed Δp as a recoil. QCC would not be lost, because of the time reversal: since the virtual graviton propagates backwards in time, QCC holds true - the situation is the PT reversal of a positive energy dark graviton with momentum Δp propagating in its own direction.

3. Are planets then primitive conscious entities soaking up gravitational energy from the Sun?! From this it is not a long way to the idea that living organisms on Earth soak up energy from the Sun also as dark photons. All physical systems are trying to steal energy from each other! One can safely give up the belief that Nature is somehow innocent. This sounds like a pre-Keplerian idea, but in ZEO it need not be inconsistent with the basic laws of physics. This picture conforms with the views of Sheldrake about learning and morphogenesis.

Consider now the experiment in this picture. What would happen as one adds a light-box below the test mass?

1. This picture about the gravitational force as remote metabolism generalizes to the present case by replacing negative energy dark gravitons with negative energy dark photons. The test mass would be a primitive living system and would gradually learn to utilize the light-box as an energy source using remote metabolism. This would conform with the observation that it takes time for the effect to emerge.

2. The test mass would send negative energy dark photons along gravitational flux tubes, and some fraction of them would be absorbed by the light-box as they transform to negative energy bio-photons with some rate - at least if quantum criticality in some sense is realized: in what sense remains an open question. Does quantum criticality develop during the time needed for the effect to emerge? Certainly the fact that the photons in the light-box have energies in the range covered by bio-photon energies matters.

3. If negative energy dark photons have Δp parallel to the direction of motion with the reversed arrow of time, Δp is directed downwards, and the effective weight increases if the box is below the test mass. If the box is above the test mass, the effective weight is reduced. This is what has been reported in the article. From the size of the weight reduction, about 1 per cent, one could in principle get an idea about the rate for the transformation of dark photons to ordinary visible photons.

4. A related TGD inspired suggestion is that topological light rays (MEs) parallel to the magnetic flux tubes mediating the gravitational interaction are generated, and that dark photons can be assigned to them. The fundamental property of MEs is that pulses can propagate only in a single direction, and this could relate closely to the sign of the force. A dark photon Bose-Einstein condensate propagating in a single direction is generated as photons from the light-box transform to dark bosons. For a given ME all dark photons must be collinear, just as the classical pulses inside an ME propagate only in a single direction. The direction would be towards the test mass and opposite to the direction of the momentum exchange involved, making the interaction attractive.
Also now the TGD analogs of standing waves might be involved; they would correspond to pairs of "plane wave" MEs such that the sums of their em fields are standing waves.

5. What is interesting is that this model could also explain the well-known fluctuations in measurements of the value of the gravitational constant (see this and this). Also Sheldrake has noticed the variation (see this). The largest variation is about one per cent from the average value, and there is evidence that the measured value varies periodically with a period of one sidereal day (the galaxy as the rest system). This suggests that the test mass soaks energy from the flux tubes of the galactic magnetic field: I have indeed proposed that they mediate the gravitational interaction of Earth (the local geometric entanglement of galactic flux tubes could be essential for the formation of various biological or even more general material structures). The effectiveness of the soaking could depend on the angle characterizing the orientation of the gravitational flux tubes with respect to the line connecting Earth to the Galactic center, varying in the range [0, π]. The effectiveness could also depend on the position of Earth on its orbit around the Sun, giving an annual variation: could the local density of the galactic flux tubes have a periodic variation? There are also other interesting appearances of the sidereal day and year in living matter (see this). The long measurement times should tend to affect the measured value of the gravitational constant G. One should arrange the instruments so that they are not below or above the test mass.

One can criticize the idea.

1. A skeptic of course argues that the assumption that all matter has some aspects assigned to living systems is the worst kind of pseudo-science they have ever met, and that now these quantum crackpots try to bring physics back to pre-Keplerian times. ZEO is however completely consistent with the basic laws of classical physics and quantum physics. The fact is that TGD predicts that dark matter is a key aspect of what it is to be living. Adelization of physics means that cognition is present in all scales - already in elementary particle length scales, as the success of p-adic mass calculations suggests. TGD also predicts a hierarchy of conscious entities. Also skeptics explain all our activities in terms of conscious choices. Maybe also skeptics should finally accept free will as a fact and try to explain it scientifically. The consoling news for skeptics is that in ZEO one can indeed assign causal powers to consciousness without ending up in conflict with the laws of physics.

2. A physicalist would argue that one can just assume that the light-box has an additional attractive interaction with the test mass, analogous to the gravitational interaction. This interaction should be electromagnetic - certainly not the extremely weak gravitational interaction. Coulomb attraction is probably not in question. The interaction energy should increase in magnitude as the distance between the test mass and the light-box decreases, to give an attractive force as the gradient of the interaction energy - just as in the case of gravitation. If this picture is correct, one should be able to express this interaction in more familiar terms.

For details see the new chapter The anomalies in rotating magnetic systems as a key to the understanding of morphogenesis? of "TGD and fringe physics" or the article with the same title. For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.
Thursday, November 17, 2016

During almost two decades I have returned repeatedly to the fascinating but unfortunately un-recognized work of Roschin and Godin on rotating magnetic systems. With the recent advances in TGD it has become clear that the reported strange effects - such as the change of weight proportional to the rotation velocity of the rollers, taking place above a 3.3 Hz rotation frequency, and the rapid acceleration above 9.2 Hz up to a frequency of 10 Hz - could provide clues for developing a general vision about the morphogenesis of the magnetic body, whose flux quanta can carry Bose-Einstein condensates of dark charged ions with given mass and charge, if the hypothesis heff=n× h=hgr identifying dark matter as phases with a non-standard value of Planck constant holds true. At this time my friend Samuli Pentikäinen re-stimulated my interest by sending some links to the files describing the patent of Godin and Roschin. We had a nice brainstorming session about the system, which eventually inspired the preparation of this article to clarify my recent views about the system. One can find on the web a brief description of the rotating magnetic system (see this) and the English translation of the patent (see this). I am grateful to Samppa for these links and the interesting discussions.

The generalization of the Chladni mechanism would provide a general model for how magnetic flux tubes, carrying charged particles with given mass at a given flux tube, drift to the nodal surfaces giving rise to magnetic walls in the field of standing or even propagating waves assignable to "topological light rays" (MEs). Ordinary matter would in turn condense around these dark magnetic structures, so that the Chladni mechanism would serve as a general mechanism of morphogenesis. This mechanism could be universal and work even in astrophysical systems (formation of planets). The change of weight correlating with the direction of rotation (parity breaking) and the rapid acceleration could be understood in terms of momentum and angular momentum transfer by dark photons liberated in the quantum phase transition of many-particle states of dark charged particles to form cyclotron Bose-Einstein condensates, giving rise to analogs of superconductivity and spontaneous magnetization.

For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Thursday, November 10, 2016

Emergent gravity and dark Universe

Eric Verlinde has published an article with the title "Emergent Gravity and the Dark Universe" (see this). The article represents his recent view about gravitational force as a thermodynamical force, described earlier, and suggests an explanation for the constant velocity spectrum of distant stars around galaxies and for the recently reported correlation between the real acceleration of distant stars and the corresponding acceleration caused by baryonic matter. In the following I discuss Verlinde's argument and compare the physical picture with that provided by TGD. I have already earlier discussed Verlinde's entropic gravity from the TGD point of view (see this). The basic observation is that Verlinde introduces long range quantum entanglement appearing even in cosmological scales: in the TGD framework the hierarchy of Planck constants does this in a much more explicit manner and has been part of TGD for more than a decade. It is nice to see that the basic ideas of TGD are gradually popping up in the literature. Before continuing it is good to recall the basic argument against the identification of gravity as an entropic force.
As Kobakhidze notices, neutron diffraction experiments suggest that the gravitational potential appears in the Schrödinger equation. This cannot be the case if the gravitational potential has a thermodynamic origin and therefore follows from the statistical predictions of quantum theory: in my opinion Verlinde mixes apples with oranges.

Verlinde's argument

Consider now Verlinde's recent argument.

1. Verlinde wants to explain the recent empirical finding of a correlation between the acceleration of distant stars around a galaxy and that of baryonic matter (see this) in terms of an apparent dark energy assigned with an entanglement entropy proportional to volume, rather than to horizon area as in the Bekenstein-Hawking formula. This means giving up standard holography and introducing an entropy proportional to volume. To achieve this he replaces anti-de Sitter space (AdS) with de Sitter space (dS), with a cosmic horizon expressible in terms of the Hubble constant, and assigns it with long range entanglement, since in AdS only short range entanglement is believed to be present (area law). This would give rise to an additional entropy proportional to volume rather than area. Dark energy or matter would correspond to the thermal energy assignable to this long range entanglement.

2. Besides this Verlinde introduces tensor nets as a justification for the emergence of gravitation: this is just a belief. All arguments that I have seen about this are circular (one introduces 2-D surfaces and thus also 3-space from the beginning), and also Verlinde uses dS space. What is to my opinion alarming is that there is no fundamental approach really explaining how space-time and gravity emerge. The emergence of space-time should lead also to the emergence of the spinor structure of space-time, and this seems to me impossible if one really starts from a mere Hilbert space.

3. Verlinde also introduces an analogy with the thermodynamics of glass, involving both a short range crystal structure and an amorphous long range behaviour; the entanglement entropy in long scales would correspond to the long range structure. Also an analogy with elasticity is introduced. Below the Hubble scale the microscopic states do not thermalize and display memory effects. The dark gravitational force would be analogous to an elastic response due to what he calls entropy displacement.

4. Verlinde admits that this approach does not say much about cosmology or cosmic expansion, and even less about inflation.

The long range correlations of Verlinde correspond to the hierarchy of Planck constants in the TGD framework

The physical picture has analogies with my own approach (see this) to the explanation of the correlation of the baryonic acceleration with the observed acceleration of distant stars. In particular, long range entanglement has as its TGD counterpart the identification of dark matter in terms of phases labelled by the hierarchy of Planck constants.

1. Concerning the emergence of space and gravitation TGD leads to a different view. It is not 3-space but the experience about 3-space - proprioception - which would emerge, via tensor nets realized in TGD in terms of magnetic flux tubes emerging from the 3-surfaces defining the nodes of the tensor net (see this). This picture leads to a rather attractive view about quantum biology (see for instance this).

2. The twistor lift of TGD has rapidly become a physically convincing formulation of TGD (see this).
One replaces space-time surfaces in M4× CP2 with 6-D surfaces in the 12-D product T(M4)× T(CP2) of the twistor spaces T(M4) and T(CP2), and Kähler action with its 6-D variant. This requires that T(M4) and T(CP2) have a Kähler structure. This is true, but only for M4 (and its variants E4 and S4) and CP2. Hence TGD is completely unique both mathematically and physically (providing a unique explanation for the standard model symmetries). The preferred extremal property for Kähler action could reduce to the property that the 6-D surface, as an extremal of the 6-D Kähler action, is the twistor space of the space-time surface and thus has the structure of an S2 bundle. That this is indeed the case for the preferred extremals of the dimensionally reduced 4-D action, expressible as a sum of Kähler action and a volume term, remains to be rigorously proven.

3. Long range entanglement even in cosmic scales would be crucial and would give the volume term in the entropy, breaking holography in the usual sense. In the TGD framework the hierarchy of Planck constants heff=n× h, satisfying the additional condition heff=hgr, where hgr=GMm/v0 (M and m are masses and v0 is a parameter with dimensions of velocity) is the gravitational Planck constant introduced originally by Nottale and assignable to the magnetic flux tubes mediating the gravitational interaction, makes possible quantum entanglement even in astrophysical and cosmological length scales, since hgr can be extremely large (a numerical illustration is given at the end of this posting). In TGD however most of the galactic dark matter and energy is associated with cosmic strings having galaxies along them (like pearls in a necklace). Baryonic dark matter could correspond to the ordinary matter resulting in the decay of cosmic strings, which take the role of the inflaton field in the very early cosmology. This gives automatically a logarithmic potential giving rise to a constant velocity spectrum, modified slightly by baryonic matter, and a nice explanation for the correlation which served as Verlinde's motivation.

4. Also the glass analogy has a TGD counterpart. Kähler action has a huge vacuum degeneracy giving rise to 4-D spin-glass degeneracy. In the twistor lift of TGD a cosmological term appears and reduces the degeneracy by allowing only minimal surfaces rather than all vacuum extremals. This removes the non-determinism. The cosmological constant is however extremely small, implying non-perturbative behavior in the sense that the volume term of the action is extremely small and depends very weakly on the preferred extremal. This suggests that spin glass behaviour in the 3-D sense remains as Kähler action with varying sign is added.

5. The mere Kähler action for the Minkowskian (at least) regions of the preferred extremals reduces to Chern-Simons terms at the light-like 3-surfaces at which the signature of the induced metric of the space-time surface changes from Minkowskian to Euclidian. The interpretation could be that TGD is almost a topological quantum field theory. Also an interpretation in terms of holography can be considered. The volume term proportional to the cosmological constant, given by the twistorial lift of TGD (see this), could mean a small breaking of holography in the sense that it cannot be reduced to a 3-D surface term. One must however be very cautious here, because TGD strongly suggests a strong form of holography, meaning that the data at string world sheets and partonic 2-surfaces (or possibly at their metrically 2-D light-like orbits, for which only the conformal equivalence class matters) fix the 4-D dynamics.
The volume term means a slight breaking of the flatness of 3-space in cosmology, since the 3-D curvature scalar cannot vanish for a Robertson-Walker cosmology imbeddable as a minimal surface, except at the limit of an infinitely large causal diamond (CD), implying that the cosmological constant, which is proportional to the inverse of the p-adic length scale squared, vanishes at this limit. Note that the dependence Λ ∝ 1/p, p the p-adic prime, allows to solve the problem caused by the large value of the cosmological constant in the very early cosmology. Quite generally, the volume term would describe finite volume effects analogous to those encountered in thermodynamics.

The argument against gravitation as entropic force can be circumvented in zero energy ontology

Could TGD allow to resolve the basic objection against gravitation as an entropic force, or to generalize this notion?

1. In Zero Energy Ontology quantum theory can be interpreted as a "complex square root of thermodynamics". The vacuum functional is an exponent of the action determining the preferred extremals - Kähler action plus the volume term present for the twistor lift. This brings in the gravitational constant G and the cosmological constant Λ as fundamental constants besides the CP2 size scale R and the Kähler coupling strength αK (see this). The vacuum functional would be analogous to an exponent of Ec/2, where Ec is a complexified energy. I have also considered the possibility that the vacuum functional is analogous to the exponent of free energy, but the following argument favors the interpretation as an exponent of energy.

2. The variation of the Kähler action would give rise to the analog of a TdS term, and the variation of the cosmological constant term to the analog of a -pdV term, in dE = TdS - pdV. Both T and p would be complex and would receive contributions from both Minkowskian and Euclidian regions. The contributions of the Minkowskian and Euclidian regions to the action differ by a multiplication with the imaginary unit, and it is possible that the Kähler coupling strength is complex, as suggested in (see this). If the inverse of the Kähler coupling strength is proportional to a zero of Riemann zeta at the critical line, it is complex, and the coefficient of the volume term must have the same phase: otherwise space-time surfaces are extremals of Kähler action and minimal surfaces simultaneously. In fact, the known non-vacuum extremals of Kähler action are surfaces of this kind, and one cannot exclude the possibility that the preferred extremals quite generally have this property.

3. Suppose that both terms in the action are proportional to the same phase factor. The part of the variation of the Kähler action with respect to the imbedding space coordinates giving the analog of the TdS term would give the analog of an entropic force. Since the variation of the entire action vanishes, this contribution would be equal to the negative of the variation of the volume term with respect to the induced metric, given by -pdV. Since the variations of the Kähler action and the volume term cancel each other, the entropic force would be non-vanishing only for the extremals for which the Kähler action density is non-vanishing. The variation of the Kähler action contains variations with respect to both the induced metric and the induced Kähler form, so that the sum of the gravitational and U(1) forces would be equal to the analog of the entropic force, and Verlinde's proposal would not generalize as such.
The variation of the volume term gives rise to a term proportional to the trace of the second fundamental form, which is the 4-D generalization of the ordinary force and vanishes for the vacuum extremals of Kähler action, in which case one has the analog of a geodesic line. More generally, Kähler action gives rise to the generalization of a U(1) force, so that the field equations give a 4-D generalization of the equations of motion for a point-like particle in a U(1) force field, having also an interpretation as a generalization of an entropic force.

4. There is however an objection against this picture. All known extremals of Kähler action are minimal surfaces, and there are excellent number theoretical arguments suggesting that all preferred extremals of Kähler action are also minimal surfaces, so that the original picture would be surprisingly near the truth. The separate vanishing of the variations implies that the solutions do not depend at all on the coupling parameters, as suggested by number theoretical universality and the universality of the dynamics at quantum criticality. The discrete coupling constant evolution however makes itself visible classically via the boundary conditions. This would however predict that the analogs of TdS and pdV vanish identically in the space-time interior. The variations however involve also boundary terms, which need not vanish separately, since the actions in the Euclidian and Minkowskian regions differ by a multiplication with (-1)1/2! The variations reduce to terms proportional to the normal component of the canonical momentum current contracted with the deformation at the light-like 3-surfaces bounding Euclidian and Minkowskian space-time regions. These must vanish. If the Kähler coupling strength is real, this implies a decoupling of the dynamics due to the volume term and the Kähler action also at the light-like 3-surfaces, and therefore also the exchange of charges - in particular four-momentum - becomes impossible. This would be a catastrophe. If αK is complex, as quantum TGD as a square root of thermodynamics and the proposal that the spectrum of 1/αK corresponds to the spectrum of zeros of zeta require, the normal component of the canonical momentum current for Kähler action equals that for the volume term on the other side of the bounding surface. The analog of dE = TdS - pdV = 0 would hold true in a non-trivial sense at the light-like 3-surfaces, and the thermodynamical analogy holds true (note that energy is replaced with action). The reduction of the variations to boundary terms would also conform with holography. The strong form of holography would even suggest that the 3-D boundary term in turn reduces to 2-D boundary terms. A possible problem is caused by the variation of the volume term: g41/2 vanishes at the boundary and gnn diverges. The overall result should be finite and should be achieved by proper boundary conditions. What I have called the weak form of electric-magnetic duality allows to avoid similar problems for Kähler action, and implies self-duality of the induced Kähler form at the boundary. A weaker form of the boundary conditions would state that the sum of the variations of the Kähler action and the volume term is finite.

Physically this picture is very attractive and makes the cosmological constant term emerging from the twistorial lift rather compelling. What is nice is that this picture follows from the field equations of TGD rather than from mere heuristic arguments without an underlying mathematical theory.

See the article Emergent gravity and dark Universe or the chapter TGD and GRT of "Physics in Many-Sheeted Space-time".
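To put a number on "extremely large", here is the back-of-the-envelope evaluation promised above (my illustration, not part of the original posting) of hgr = GMm/v0 for the Sun-Earth pair. The value v0 ≈ 2-11 c used below is an assumption borrowed from the Nottale-inspired estimates; only the order of magnitude matters.

```python
# h_gr = G M m / v0 for the Sun-Earth pair, compared with the ordinary hbar.
G = 6.674e-11          # m^3 kg^-1 s^-2
hbar = 1.055e-34       # J s
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
m_earth = 5.972e24     # kg
v0 = c / 2**11         # assumed velocity parameter, ~1.5e5 m/s

h_gr = G * M_sun * m_earth / v0
print(f"h_gr / hbar ~ {h_gr / hbar:.1e}")   # ~5e73: astronomically large
```

With any reasonable choice of v0 the ratio hgr/ħ stays in the 1070-1080 range, which is the sense in which entanglement in astrophysical scales becomes possible in this picture.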
For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Wednesday, November 09, 2016

Muon surplus in high energy cosmic ray showers as an indication for new hadron physics

According to the article "Viewpoint: Cosmic-Ray Showers Reveal Muon Mystery" in APS Physics (see this), the Pierre Auger Observatory reports a 30 per cent muon surplus in cosmic ray showers at ultrahigh energies around 1019 eV (see this). These events are at the knee of the cosmic ray energy distribution: at higher energies the flux of cosmic rays should be reduced due to the loss of energy to the cosmic microwave background. There are actually indications that this does not take place, but that is not the point now. The article tells how these showers are detected and also provides a simple model for them.

This energy is estimated in the rest system of Earth and corresponds to an energy of about 130 TeV in the cm system for a collision with a nucleon. This is roughly 10 times the cm energy of 14 TeV at LHC. The shower produced by the cosmic ray is a cascade in which the high energy cosmic ray gradually loses its energy via hadron production. The muons are relatively low energy muons resulting from hadronic decays, mostly pion decays, since most of the energy ends up in charged pions - producing muons and, via their decays, electrons - and in neutral pions decaying rapidly to gamma pairs. The electron-positron pairs produced in the electromagnetic showers initiated by the neutral pions mask the electrons from the charged pion decay chain, so that the possible surplus can be detected only for muons.

Since cosmic rays are mostly protons and nuclei, the primary collisions should involve a collision of the cosmic ray particle with a nucleon of the atmosphere. The anomalously large muon yield suggests an anomalous yield of proton-antiproton pairs produced in the first few collisions. The protons and antiprotons would then collide with the nuclei of the atmosphere, lose their energy, and give rise to an anomalously large number of pions and eventually muons. Unless the models for the production (constrained by LHC data) underestimate the muon yield, new physics is required to explain the source of the proton-antiproton pairs.

In the TGD framework one can consider two scaled up variants of hadron physics as candidates for the new physics.

1. The first candidate corresponds to M89 hadron physics, for which hadron masses would be obtained by a scaling with the factor 512 from the masses of ordinary hadrons characterized by the Mersenne prime M107 = 2107-1. There are several bumps identifiable as pseudo-scalar mesons with the predicted masses, and also some bumps identifiable as scaled up vector mesons (see this). Also the unexpected properties of what was expected to be quark gluon plasma suggest M89 hadron physics. In particular, the evidence for string like states suggests M89 mesons. If the situation is quantum critical, M89 hadrons have scaled up Compton lengths. The natural guess is that the Compton length corresponds to the size of ordinary hadrons. The proton of M89 (= 289-1) hadron physics would have a mass of 512 GeV, so that the production of M89 hadrons could take place at energies which for ordinary hadrons would correspond to 260 GeV, meaning that perturbative M89 QCD could be used. The quarks of this hadron physics would hadronize either directly to ordinary M107 hadrons or to M89 hadrons.
In both cases a phase transition like process would lead from M89 hadrons to M107 hadrons and produce a surplus of protons and antiprotons, whose collisions with the nuclei of the atmosphere would produce a surplus of pions.

2. One can also consider M79 hadron physics, where MG,79 corresponds to the Gaussian Mersenne (1+i)79-1. The mass scale would be 32 times higher than that for M89 hadron physics and correspond to 8 GeV for ordinary hadron collisions. Also now perturbative QCD would apply.

One can argue that M89 or MG,79 hadron physics comes into play for collisions with a small enough impact parameter and gives an additive contribution to the total rate of proton and antiproton production. The additional contribution would be of the same order of magnitude as that from M107 hadron physics. Could quantum criticality play some role now?

1. What if the situation is quantum critical, with heff/h > 1? The first naive guess is that at the level of tree diagrams, corresponding to the classical theory, the production rate has no dependence on Planck constant, so that nothing happens. A less naive guess is that something similar to what possibly takes place at LHC and RHIC happens. Quantum critical collisions in which protons just pass by each other could yield dark pseudo-scalar mesons.

2. If quantum criticality corresponds to peripheral collisions, the rate for pseudo-scalar production would be large, unlike for central collisions. The instanton action, determined to a high degree by anomaly considerations, would determine the production rate for pseudo-scalar mesons. Vector meson dominance would allow to estimate the rate for the production of vector bosons. Peripherality could make the observation of these collisions difficult - especially so if the peripheral collisions are rejected because they are not expected to involve strong interactions and are therefore regarded as uninteresting. This might explain the disappearance of the 750 GeV bump.

3. Suppose that quantum criticality for peripheral collisions at LHC and RHIC enters the game above the mass scale of the M89 pion, with mass about 65mp ∼ 65 GeV, and leads to the creation of M89 mesons. By a simple scaling argument the same would happen in the case of MG,79 hadron physics above 65mp(89) = 3.3× 104 GeV, to be compared with the cm collision energy of ultrahigh energy cosmic rays, about 13× 104 GeV.

See the article M89 Hadron Physics and Quantum Criticality or the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Physics".

For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Tuesday, November 08, 2016

Induction coils in many-sheeted space-time

I have been trying to concretize many-sheeted space-time by thinking about what simple systems involving electric and magnetic fields would look like in many-sheeted space-time. The challenge is highly non-trivial, since the basic difference between Maxwell's theory and TGD is that TGD allows an extremely limited repertoire of preferred extremals and there is no linear superposition.

1. By general coordinate invariance only 4 field-like variables (say CP2 coordinates) are possible, meaning that all classical fields, identified as induced fields, are expressible in terms of only four field-like variables at a given sheet. This has several implications. The classical field equations determining the space-time surface are extremely non-linear, although they have a simple interpretation as expressions for the local conservation laws of Poincare charges and color charges.
Linear superposition of Maxwell's equations is lost. Only for so called topological light rays ("massless extremals", MEs) does linear superposition hold true, and only in an extremely limited sense: for the analogs of plane waves travelling in either direction along the ME. One has pulses of arbitrary shape preserving their shape and propagating in a single direction only, with maximal signal velocity.

2. Strong form of holography (SH) implies that 2-dimensional data at string world sheets and partonic 2-surfaces fix the space-time surfaces. The 2-D data include also the tangent spaces of the partonic 2-surfaces, so that the situation is only effectively 2-D and TGD does not reduce to any kind of string model. It is possible that the light-like 3-surfaces defining the parton orbits as the boundaries of Minkowskian and Euclidian space-time regions possess dynamical degrees of freedom as conformal equivalence classes. Kac-Moody type transformations trivial at the ends of the partonic orbit at the boundaries of the causal diamond (CD) would generate physically equivalent partonic orbits. There would be n conformal equivalence classes, where n would correspond to the value of Planck constant heff=n× h. At the ends of the orbit all these n sheets of the singular covering would coincide. The possible additional degrees of freedom making the partonic 2-surfaces somewhat 3-D would therefore be discrete and would make dark matter in the TGD sense possible.

What is clear is that a single space-time sheet is a very simple entity, and one can assign to it only an extremely limited set of, say, solutions of Maxwell's equations. More complex solutions must correspond to many-sheeted space-time surfaces, approximated as slightly curved pieces of Minkowski space at the GRT-QFT limit of TGD. This limit is obtained by noticing that a test particle touches all sheets of the space-time surface in a given region of Minkowski space - they are extremely near to each other. The test particle experiences the sums of the induced gauge potentials and of the gravitational fields defined as deviations of the induced metric from the flat Minkowski metric. These sums correspond naturally to the gauge potentials and gravitational fields assignable to the GRT-QFT limit. One obtains GRT plus the standard model.

The challenge is to look at whether one can indeed construct typical Maxwellian field configurations as sums of electromagnetic gauge potentials represented as induced gauge potentials at various sheets. The simplest configurations would be realizable using only two sheets. I have already considered the realization of standing waves - not possible as single sheeted structures - as ≥ 2-sheeted structures carrying the analogs of sinusoidal waves (see this); a numerical toy illustration is given after the following list.

1. The proposal is that magnetic bodies (MBs) use this kind of standing wave patterns to generate biological structures: charged biomolecules would end up at the nodal surfaces of the standing wave and become stationary structures. Of course, also time varying nodes are possible.

2. The MB could use the MEs parallel to the flux tubes connected to a given node of a tensor network to generate biological structures at the node. Note that the interference pattern would be completely analogous to that of a hologram, but allowing more than two waves. As a matter of fact, I considered a vision about living systems as conscious holograms decades ago (see this) but was not able to invent a concrete model at that time. This Chladni mechanism - as one might call it - could be a general mechanism of morphogenesis and morphostasis.
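The promised toy illustration (mine, not from the original posting) of the two-sheet realization: a test particle touching both sheets experiences the sum of two counter-propagating sinusoidal waves, and that sum is a standing wave whose nodes never move - exactly the property the Chladni mechanism needs.

```python
# Toy model of the two-sheet standing wave: sheet 1 carries a right-moving
# sinusoidal wave, sheet 2 a left-moving one; the GRT-QFT limit sums them.
import numpy as np

k, omega = 2 * np.pi, 2 * np.pi          # wave number and angular frequency (c = 1)
x = np.linspace(0.0, 2.0, 2001)
node_mask = np.isclose(np.sin(k * x), 0.0, atol=1e-9)   # zeros of sin(kx)

def field_sum(t):
    sheet1 = np.sin(k * x - omega * t)   # wave on sheet 1, moving in +x
    sheet2 = np.sin(k * x + omega * t)   # wave on sheet 2, moving in -x
    return sheet1 + sheet2               # = 2 sin(kx) cos(omega t): standing wave

for t in (0.0, 0.3, 0.7):
    assert np.allclose(field_sum(t)[node_mask], 0.0)     # nodes stay put in time
print("static nodal points at x =", np.round(x[node_mask], 3))
```

Neither single wave alone has static nodes; only the sum experienced by the test particle does.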
Second challenge - raised by the discussions with my friend Samppa - is provided by the field pattern of an induction coil with AC current flowing around the boundary of a cylinder.

1. The current is typically an AC current. The oscillating magnetic field has direction parallel or opposite to the cylinder axis, and the electric field lines rotate around the cylinder. That the geometry of the field pattern is like this is easy to understand just by looking at the general form of the solutions of the Maxwell's equations in question.

2. What is essential is that one has a standing wave type field pattern, meaning that the fields at all points of the cylinder oscillate in the same phase. The temporal and spatial dependences of the magnetic field separate into a product of a sinusoidal function and a spatial function, which in the simplest situation is constant. One might even regard the standing wave property as a signal of quantum coherence.

Could one use MEs as building bricks to construct the field pattern associated with the coil?

1. MEs define an extremely general set of (hopefully preferred) extremals of Kähler action. The basic type of ME corresponds to cylindrical regions inside which pulses propagate in the same direction along the cylinder. It is also possible to have waves propagating radially, either inwards or outwards. The ansatz for the ME is the following. Restrict the consideration to a geodesic sphere S2 of CP2, and assume that the S2 coordinates (Θ,Φ) have the dependence [sin(Θ) = sin(ω(t ± ρ)), Φ = nφ], where (ρ,φ) are the cylindrical radial and angle coordinates. The resulting magnetic field is in the z-direction, and the lines of the electric field rotate around the cylinder. The sum of these fields is a standing wave in the radial direction, representing just the right kind of magnetic and electric fields.

2. What is amusing is that the field experienced by the test particle would be expressible as the sum of two modes of the TGD counterpart of the radiation field. If the AC frequency is 50 Hz, the radial cylindrical wave characterized by the wave vector k = ω (c = 1) has a wavelength λ = 2π/k of order 107 meters - the order of magnitude of the Earth's radius! Hence the longitudinal magnetic field is essentially constant for the coils encountered in practical situations.

3. The boundary of the cylinder carries the AC current. The description of this current is a further challenge for TGD and will not be considered here.

One can of course have more general currents generating much more general waves, not necessarily standing waves.

1. The general recipe would be simple. These fields can be expressed as a Fourier decomposition of simple sinusoidal field patterns. Assign to each sinusoidal field pattern a space-time sheet in the proposed manner, so that the superposition of modes is replaced with a union of space-time surfaces.

2. The more terms in the Fourier expansion, the larger the number of sheets of the many-sheeted space-time. The number of space-time sheets gives a measure for the complexity of the system. For instance, a current with the form of a square pulse is an interesting challenge: should one approximate the square pulse as a superposition of space-time sheets of its Fourier components? A toy estimate of the sheet count this would require is sketched below.

See the chapter TGD Based View about Classical Fields in Relation to Consciousness Theory and Quantum Biology of "TGD and EEG".

For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.
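The sheet-count sketch promised above (my illustration; it simply takes the one-sheet-per-Fourier-mode rule of the posting at face value): the Fourier series of a square wave contains only odd harmonics with amplitudes falling off as 1/n, so the number of sheets grows quickly with the demanded accuracy.

```python
# Square wave f(t) = sign(sin(t)) has the Fourier series
#   f(t) = (4/pi) * sum over odd n of sin(n t) / n.
# Count how many odd-harmonic "sheets" are needed before the worst-case error
# of the truncated series, away from the jumps, drops below a tolerance.
import numpy as np

t = np.linspace(0.05, np.pi - 0.05, 4000)   # stay away from the discontinuities
target = np.ones_like(t)                     # square wave equals +1 on (0, pi)

partial = np.zeros_like(t)
sheets = 0
for n in range(1, 2002, 2):                  # odd harmonics n = 1, 3, 5, ...
    partial += (4 / np.pi) * np.sin(n * t) / n
    sheets += 1
    if np.max(np.abs(partial - target)) < 0.01:
        break
print(f"{sheets} sinusoidal sheets reach 1% accuracy away from the jumps")
```

In this counting a sharp square pulse is an enormously many-sheeted structure, which fits the idea that sheet number measures the complexity of the system.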
Saturday, July 12, 2008

DNA as topological quantum computer: XIV

I have worked hard on refining the chapters about quantum biology in the TGD Universe. DNA as topological quantum computer was the original idea; it has generalized considerably and led to a beautiful unification of various basic ideas. I attach below the abstract of the chapter DNA as topological quantum computer contained in the book "Genes and Memes".

This chapter represents a vision about how DNA might act as a topological quantum computer (tqc). Tqc means that the braidings of the braid strands define tqc programs, and the M-matrix (a generalization of the S-matrix in zero energy ontology), defining the entanglement between states assignable to the end points of the strands, defines the tqc, usually coded as a unitary time evolution for the Schrödinger equation. One ends up with the model in the following manner.

1. Darwinian selection, for which the standard theory of self-organization provides a model, should apply also to tqc programs. Tqc programs should correspond to asymptotic self-organization patterns selected by dissipation in the presence of metabolic energy feed. The spatial and temporal pattern of the metabolic energy feed characterizes the tqc program - or equivalently - the sub-program call.

2. Since braiding characterizes the tqc program, the self-organization pattern should correspond to a hydrodynamical flow or a pattern of magnetic field inducing the braiding. Braid strands must correspond to magnetic flux tubes of the magnetic body of DNA. If each nucleotide is a transversal magnetic dipole, it gives rise to transversal flux tubes, which can also connect to the genome of another cell. As a matter of fact, the flux tubes would correspond to what I call wormhole magnetic fields, having pairs of space-time sheets carrying opposite magnetic fluxes.

3. The output of a tqc sub-program is a probability distribution for the outcomes of state function reduction, so that the sub-program must be repeated very many times. It is represented as four-dimensional patterns for various rates (chemical rates, nerve pulse patterns, EEG power distributions, ...) having also an identification as temporal densities of zero energy states in various scales. By the fractality of the TGD Universe there is a hierarchy of tqcs corresponding to the p-adic and dark matter hierarchies. Programs (space-time sheets defining coherence regions) call programs in shorter scales. If the self-organizing system has a periodic behavior, each tqc module defines asymptotically a large number of almost-copies of itself. Generalized EEG could naturally define this periodic pattern, and each period of EEG would correspond to an initiation and halting of tqc. This brings to mind the periodically occurring sol-gel phase transition inside the cell near the cell membrane. There is also a connection with the hologram idea: the EEG rhythm corresponds to the reference wave and the nerve pulse patterns to the wave carrying the information and interfering with the reference wave.

4. The fluid flow must induce the braiding, which requires that the ends of the braid strands are anchored to the fluid flow. Recalling that the lipid mono-layers of the cell membrane are liquid crystals and that the lipids of the interior mono-layer have hydrophilic ends pointing towards the cell interior, it is easy to guess that DNA nucleotides are connected to lipids by magnetic flux tubes and that the hydrophilic lipid ends are stuck to the flow.

5. The topology of the braid traversing the cell membrane cannot be affected by the hydrodynamical flow. Hence the braid strands must be split during tqc.
This also induces the desired magnetic isolation from the environment. Halting of tqc reconnects them and makes possible the communication of the outcome of tqc.

There are several problems related to the details of the realization.

1. How are the nucleotides A, T, C, G coded to the strand color, and what does this color correspond to physically? There are two options, which could be characterized as fermionic and bosonic. i) Magnetic flux tubes having quark and antiquark at their ends, with u, d and uc, dc coding for A, G and T, C. CP conjugation would correspond to conjugation for DNA nucleotides. ii) Wormhole magnetic flux tubes having a wormhole contact and its CP conjugate at their ends, with the wormhole contact carrying quark and antiquark at its throats. The latter are predicted to appear in all length scales in the TGD Universe.

2. How to split the braid strands in a controlled manner? High Tc superconductivity provides a possible mechanism: a braid strand can be split only if the supra current flowing through it vanishes. A suitable voltage pulse induces the supra current, and its negative cancels it. The conformation of the lipid controls whether it can follow the flow or not.

3. How can magnetic flux tubes be cut without breaking the conservation of the magnetic flux? The notion of a wormhole magnetic field could save the situation now: after the splitting, the flux returns back along the second space-time sheet of the wormhole magnetic field. An alternative solution is based on the reconnection of flux tubes. Since only flux tubes of the same color can reconnect, this process can induce a transfer of strand color - "color inheritance": when applied at the level of amino acids, this leads to a successful model of protein folding. Reconnection makes possible the breaking of a flux tube connection for both ordinary magnetic flux tubes and wormhole magnetic flux tubes.

4. How are the magnetic flux tubes realized? The interpretation of flux tubes as correlates of directed attention at the molecular level leads to a concrete picture. Hydrogen bonds are by their asymmetry natural correlates for directed attention at the molecular level. Also flux tubes between the acceptors of hydrogen bonds must be allowed, and the acceptors can be seen as the subjects of directed attention and the donors as the objects. Examples of acceptors are the aromatic rings of nucleotides, the O= atoms of phosphates, etc. A connection with metabolism is obtained if it is assumed that the various phosphates XMP, XDP, XTP, X = A, T, G, C, act as fundamental acceptors and plugs in the connection lines. The basic metabolic process ATP → ADP + Pi allows an interpretation as a reconnection splitting a flux tube connection, and the basic function of phosphorylating enzymes - as also of breathing and photosynthesis - would be to build flux tube connections.

The model makes several testable predictions about DNA itself. In particular, matter-antimatter asymmetry and the slightly broken isospin symmetry have counterparts at the DNA level, induced from the breaking of these symmetries for the quarks and antiquarks associated with the flux tubes. The DNA - cell membrane system is not the only possible system that could perform tqc-like activities and store memories in braidings: flux tubes could connect biomolecules, and the braiding could provide an almost-definition for what it is to be living. Even water memory might reduce to braidings. The model also leads to an improved understanding of the other roles of the magnetic flux tubes containing dark matter.
Phase transitions changing the value of Planck constant for the magnetic flux tubes could be a key element of bio-catalysis and of electromagnetic long distance communications in living matter. For instance, one ends up with what might be called a code for protein folding and bio-catalysis. There is also a fascinating connection with Peter Gariaev's work, suggesting that the phase transitions changing Planck constant have been observed and that wormhole magnetic flux tubes containing dark matter have been photographed in his experiments.

Wednesday, July 09, 2008

A code for protein folding and bio-catalysis

The TGD inspired model for the evolution of the genetic code leads to the idea that the folding of proteins obeys a folding code inherited from the genetic code. After some trials one ends up with a general conceptualization of the situation, with the identification of wormhole magnetic flux tubes as correlates of attention at the molecular level, so that a direct connection with the TGD inspired theory of consciousness emerges at the quantitative level. This allows a far reaching generalization of the DNA as topological quantum computer paradigm and makes it much more detailed.

By their asymmetric character, hydrogen bonds are excellent candidates for magnetic flux tubes serving as correlates of attention at the molecular level. The constant part of a free amino acid, containing O-H, O=, and NH2, would correspond to the codon XYZ in the sense that the flux tubes would carry the "color" representing the four nucleotides in terms of quark pairs. Color inheritance by flux tube reconnection makes this possible. For the amino acids inside a protein, O= and N-H would correspond to YZ. Also flux tubes connecting the acceptor atoms of hydrogen bonds are required by the model of DNA as topological quantum computer. The long flux tubes between O= atoms, and their length reduction in a phase transition reducing Planck constant, could be essential in the protein-ligand interaction.

The model predicts a code for protein folding: depending on whether also =O-O= flux tubes are allowed or not, the condition Y=Z or Y=Zc is satisfied by the amino acids having an N-H--O= hydrogen bond. For =O-O= bonds the Y-Yc pairing holds true. The Y=Zc option predicts the average length of alpha helices correctly. The Y=Z rule is favored by the study of the alpha helices of four enzymes: the possible average length of an alpha helix is considerably longer than the average length would be if the gene were required to be the unique gene allowing to satisfy the Y=Z rule. The explicit study of the alpha helices of four enzymes demonstrates that the condition for the existence of the hydrogen bond fails rarely, and at most for two amino acids (for 2 amino acids in a single case only). For beta sheets there are no failures for the Y=Z option.

The information apparently lost in the many-to-one character of the codon-amino acid correspondence would code for the folding of the protein, and similar amino acid sequences could give rise to different foldings. Also catalyst action would reduce to effective base pairing, and one can speak about a catalyst code. The DNA sequences associated with alpha helices and beta sheets are completely predictable, unless one assumes a quantum counterpart of wobble base pairing, meaning that the N-H flux tubes are, before hydrogen bonding, in quantum superpositions of the braid colors associated with the third nucleotides Z of the codons XYZ coding for the amino acid. Only the latter option works.
The outcome is a very simple quantitative model for folding and catalyst action, based on minimization of energy and predicting as its solutions alpha helices and beta sheets. I want to express my gratitude to Dale Trenary for interesting discussions, for suggesting proteins which could allow to test the model, and for providing concrete help in loading data from the protein data bank. I also want to thank Timo Immonen for loaning the excellent book "Proteins: Structures and Molecular Properties" by Creighton, and Pekka Rapinoja for writing the program transforming the protein data files to a form readable by MATLAB.

For details see the new chapter A Model for Protein Folding and Bio-catalysis of "Genes and Memes".
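A small toy sketch of the folding code of the previous posting (the Y=Z and Y=Zc rules are from the text above; the code, the helper names, and the use of the standard genetic code table are mine): for each amino acid one can list which of its codons XYZ satisfy Y=Z and which satisfy Y=Zc, where Zc is the Watson-Crick conjugate of Z.

```python
# Standard genetic code in the usual compact TCAG ordering; '*' marks stop.
bases = "TCAG"
aas   = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {b1 + b2 + b3: aa
               for (b1, b2, b3), aa in zip(
                   ((b1, b2, b3) for b1 in bases for b2 in bases for b3 in bases),
                   aas)}

comp = {"A": "T", "T": "A", "C": "G", "G": "C"}   # Watson-Crick conjugation

def folding_options(amino_acid):
    """Codons XYZ of the amino acid sorted into the Y=Z and Y=Zc classes."""
    codons = [c for c, aa in codon_table.items() if aa == amino_acid]
    return {"Y=Z":  [c for c in codons if c[1] == c[2]],
            "Y=Zc": [c for c in codons if c[1] == comp[c[2]]]}

for aa in "LSR":   # the six-codon amino acids have the most room to choose
    print(aa, folding_options(aa))
```

The point the posting makes then reads off directly: because several codons map to one amino acid, the codon actually "carried by the flux tubes" is extra information not visible in the amino acid sequence, and this surplus is what the model proposes codes for folding.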
Relativistic quantum mechanics

In physics, relativistic quantum mechanics (RQM) is any Poincaré covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c, and can accommodate massless particles. The theory has applications in high energy physics,[1] particle physics and accelerator physics,[2] as well as atomic physics, chemistry[3] and condensed matter physics.[4][5] Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. Relativistic quantum mechanics (RQM) is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and Heisenberg picture, were originally formulated in a non-relativistic background, a few of them (e.g. the Heisenberg formalism) also work with special relativity.

Key features common to all RQMs include: the prediction of antimatter, electron spin, spin magnetic moments of elementary spin 1/2 fermions, fine structure, and the quantum dynamics of charged particles in electromagnetic fields.[6] The key result is the Dirac equation, from which these predictions emerge automatically. By contrast, in non-relativistic quantum mechanics, terms have to be introduced artificially into the Hamiltonian operator to achieve agreement with experimental observations. The most successful (and most widely used) RQM is relativistic quantum field theory (QFT), in which elementary particles are interpreted as field quanta. A unique consequence of QFT that has been tested against other RQMs is the failure of conservation of particle number, for example in matter creation and annihilation.[7]

In this article, the equations are written in familiar 3D vector calculus notation and use hats for operators (not necessarily in the literature), and where space and time components can be collected, tensor index notation is shown also (frequently used in the literature); in addition the Einstein summation convention is used. SI units are used here; Gaussian units and natural units are common alternatives. All equations are in the position representation; for the momentum representation the equations have to be Fourier transformed - see position and momentum space.

Combining special relativity and quantum mechanics

One approach is to modify the Schrödinger picture to be consistent with special relativity.[2] A postulate of quantum mechanics is that the time evolution of any quantum system is given by the Schrödinger equation:

iħ ∂ψ/∂t = Ĥψ

using a suitable Hamiltonian operator Ĥ corresponding to the system. The solution is a complex-valued wavefunction ψ(r, t), a function of the 3D position vector r of the particle at time t, describing the behavior of the system.

Every particle has a non-negative spin quantum number s. The number 2s is an integer, odd for fermions and even for bosons. Each s has 2s + 1 z-projection quantum numbers; σ = s, s − 1, ... , −s + 1, −s.[note 1] This is an additional discrete variable the wavefunction requires; ψ(r, t, σ). Historically, in the early 1920s Pauli, Kronig, Uhlenbeck and Goudsmit were the first to propose the concept of spin.
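As a trivial concrete illustration of this discrete label (my example, not part of the article): for spin s the wavefunction carries 2s + 1 components, one per allowed z-projection.

```python
# Enumerate the z-projections sigma = s, s-1, ..., -s for spin s = two_s/2
# (two_s = 2s is always an integer, odd for fermions, even for bosons).
from fractions import Fraction

def sigma_values(two_s: int):
    """Allowed sigma for spin s = two_s/2, in descending order."""
    s = Fraction(two_s, 2)
    return [s - k for k in range(two_s + 1)]

for two_s in (1, 2, 3):
    vals = sigma_values(two_s)
    print(f"2s = {two_s}: {len(vals)} components, sigma =",
          ", ".join(str(v) for v in vals))
# 2s = 1: 2 components, sigma = 1/2, -1/2
# 2s = 2: 3 components, sigma = 1, 0, -1
# 2s = 3: 4 components, sigma = 3/2, 1/2, -1/2, -3/2
```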
The inclusion of spin in the wavefunction incorporates the Pauli exclusion principle (1925) and the more general spin-statistics theorem (1939) due to Fierz, rederived by Pauli a year later. This is the explanation for a diverse range of subatomic particle behavior and phenomena: from the electronic configurations of atoms, nuclei (and therefore all elements on the periodic table and their chemistry), to the quark configurations and colour charge (hence the properties of baryons and mesons).

A fundamental prediction of special relativity is the relativistic energy-momentum relation; for a particle of rest mass m, and in a particular frame of reference with energy E and 3-momentum p with magnitude in terms of the dot product p = √(p • p), it is:[8]

E2 = (pc)2 + (mc2)2

These equations are used together with the energy and momentum operators, which are respectively:

Ê = iħ ∂/∂t,   p̂ = −iħ∇

to construct a relativistic wave equation (RWE): a partial differential equation consistent with the energy-momentum relation, which is solved for ψ to predict the quantum dynamics of the particle. For space and time to be placed on equal footing, as in relativity, the orders of the space and time partial derivatives should be equal, and ideally as low as possible, so that no initial values of the derivatives need to be specified. This is important for probability interpretations, exemplified below. The lowest possible order of any differential equation is the first (zeroth order derivatives would not form a differential equation).

The Heisenberg picture is another formulation of QM, in which case the wavefunction ψ is time-independent and the operators A(t) contain the time dependence, governed by the equation of motion:

dA(t)/dt = (i/ħ)[Ĥ, A(t)] + ∂A(t)/∂t

This equation is also true in RQM, provided the Heisenberg operators are modified to be consistent with SR.[9][10] Historically, around 1926, Schrödinger and Heisenberg showed that wave mechanics and matrix mechanics are equivalent, later furthered by Dirac using transformation theory. A more modern approach to RWEs, first introduced during the time RWEs were developing for particles of any spin, is to apply representations of the Lorentz group.

Space and time

In classical mechanics and non-relativistic QM, time is an absolute quantity all observers and particles can always agree on, "ticking away" in the background independently of space. Thus in non-relativistic QM one has, for a many particle system, ψ(r1, r2, r3, ..., t, σ1, σ2, σ3, ...). In relativistic mechanics, the spatial coordinates and coordinate time are not absolute; any two observers moving relative to each other can measure different locations and times of events. The position and time coordinates combine naturally into a four-dimensional spacetime position X = (ct, r) corresponding to events, and the energy and 3-momentum combine naturally into the four-momentum P = (E/c, p) of a dynamic particle, as measured in some reference frame; they change according to a Lorentz transformation as one measures in a different frame boosted and/or rotated relative to the original frame in consideration. The derivative operators, and hence the energy and 3-momentum operators, are also non-invariant and change under Lorentz transformations.

Under a proper orthochronous Lorentz transformation (r, t) → Λ(r, t) in Minkowski space, all one-particle quantum states ψσ locally transform under some representation D of the Lorentz group:[11][12]

ψ(r, t) → D(Λ) ψ(Λ−1(r, t))

where D(Λ) is a finite-dimensional representation, in other words a (2s + 1)×(2s + 1) square matrix.
Again, ψ is thought of as a column vector containing components with the (2s + 1) allowed values of σ. The quantum numbers s and σ, as well as other labels, continuous or discrete, representing other quantum numbers, are suppressed. One value of σ may occur more than once depending on the representation.

Non-relativistic and relativistic Hamiltonians

The classical Hamiltonian for a particle in a potential is the kinetic energy p·p/2m plus the potential energy V(r, t), with the corresponding quantum operator in the Schrödinger picture:

Ĥ = p̂·p̂/2m + V(r, t)

and substituting this into the above Schrödinger equation gives a non-relativistic QM equation for the wavefunction: the procedure is a straightforward substitution of a simple expression. By contrast this is not as easy in RQM; the energy-momentum equation is quadratic in energy and momentum, leading to difficulties. Naively setting:

Ĥ = Ê = √(c2 p̂·p̂ + (mc2)2)

is not helpful for several reasons. The square root of the operators cannot be used as it stands; it would have to be expanded in a power series before the momentum operator, raised to a power in each term, could act on ψ (a symbolic illustration is given at the end of this section). As a result of the power series, the space and time derivatives are completely asymmetric: infinite-order in the space derivatives but only first order in the time derivative, which is inelegant and unwieldy. Again, there is the problem of the non-invariance of the energy operator, equated to the square root, which is also not invariant. Another problem, less obvious and more severe, is that it can be shown to be nonlocal and can even violate causality: if the particle is initially localized at a point r0 so that ψ(r0, t = 0) is finite and zero elsewhere, then at any later time the equation predicts delocalization ψ(r, t) ≠ 0 everywhere, even for |r| > ct, which means the particle could arrive at a point before a pulse of light could. This would have to be remedied by the additional constraint ψ(|r| > ct, t) = 0.[13]

There is also the problem of incorporating spin in the Hamiltonian, which isn't a prediction of the non-relativistic Schrödinger theory. Particles with spin have a corresponding spin magnetic moment quantized in units of μB, the Bohr magneton:[14][15]

μ̂S = g (μB/ħ) Ŝ

where g is the (spin) g-factor for the particle, and S the spin operator, so they interact with electromagnetic fields. For a particle in an externally applied magnetic field B, the interaction term[16]

Ĥint = −μ̂S · B

has to be added to the above non-relativistic Hamiltonian. On the contrary, a relativistic Hamiltonian introduces spin automatically as a requirement of enforcing the relativistic energy-momentum relation.[17]

Relativistic Hamiltonians are analogous to those of non-relativistic QM in the following respect: there are terms including the rest mass and interaction terms with externally applied fields, similar to the classical potential energy term, as well as momentum terms like the classical kinetic energy term. A key difference is that relativistic Hamiltonians contain spin operators in the form of matrices, in which the matrix multiplication runs over the spin index σ, so in general a relativistic Hamiltonian:

Ĥ = Ĥ(r, t, p̂, Ŝ)

is a function of space, time, and the momentum and spin operators.
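The promised symbolic illustration of the power-series problem (my sketch, not from the article): expanding the square-root energy in p/(mc) reproduces the rest energy, the Newtonian kinetic term, and the first relativistic correction, with ever higher powers of the momentum operator - and hence of space derivatives - appearing at each order.

```python
# Expand sqrt(p^2 c^2 + m^2 c^4) in p around 0 and compare with the familiar
# terms mc^2 + p^2/(2m) - p^4/(8 m^3 c^2); the difference of the truncations
# vanishes, and the full series never terminates.
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)
E = sp.sqrt(p**2 * c**2 + m**2 * c**4)
series = sp.series(E, p, 0, 6).removeO()
print(sp.simplify(series - (m*c**2 + p**2/(2*m) - p**4/(8*m**3*c**2))))  # -> 0
```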
The Klein-Gordon and Dirac equations for free particles

Substituting the energy and momentum operators directly into the energy-momentum relation may at first sight seem appealing, to obtain the Klein-Gordon equation:[18]

(1/c2) ∂2ψ/∂t2 − ∇2ψ + (mc/ħ)2 ψ = 0

and it was discovered by many people because of the straightforward way of obtaining it, notably by Schrödinger in 1925 before he found the non-relativistic equation named after him, and by Klein and Gordon in 1927, who included electromagnetic interactions in the equation. This is relativistically invariant, yet this equation alone isn't a sufficient foundation for RQM for a few reasons: one is that negative-energy states are solutions,[2][19] another is the density (given below), and this equation as it stands is only applicable to spinless particles. This equation can be factored into the form:[20][21]

(Ê − cα·p̂ − βmc2)(Ê + cα·p̂ + βmc2) ψ = 0

where α = (α1, α2, α3) and β are not simply numbers or vectors, but 4 × 4 Hermitian matrices that are required to anticommute for i ≠ j:

αiαj + αjαi = 0,   αiβ + βαi = 0

and square to the identity matrix:

αi2 = β2 = I

so that terms with mixed second-order derivatives cancel while the second-order derivatives purely in space and time remain. The first factor:

(Ê − cα·p̂ − βmc2) ψ = 0

is the Dirac equation. The other factor is also the Dirac equation, but for a particle of negative mass.[20] Each factor is relativistically invariant. The reasoning can be done the other way round: propose the Hamiltonian in the above form, as Dirac did in 1928, then pre-multiply the equation by the other factor of operators Ê + cα·p̂ + βmc2, and comparison with the KG equation determines the constraints on α and β. The positive mass equation can continue to be used without loss of continuity. The matrices multiplying ψ suggest it isn't a scalar wavefunction as permitted in the KG equation, but must instead be a four-component entity. The Dirac equation still predicts negative energy solutions,[6][22] so Dirac postulated that negative energy states are always occupied, because according to the Pauli principle, electronic transitions from positive to negative energy levels in atoms would be forbidden. See Dirac sea for details.

Densities and currents

In non-relativistic quantum mechanics, the square-modulus of the wavefunction ψ gives the probability density function ρ = |ψ|2. This is the Copenhagen interpretation, circa 1927. In RQM, while ψ(r, t) is a wavefunction, the probability interpretation is not the same as in non-relativistic QM. Some RWEs do not predict a probability density ρ or probability current j (really meaning probability current density), because they are not positive definite functions of space and time. The Dirac equation does:[23]

ρ = ψ†ψ,   j = c ψ†αψ

where the dagger denotes the Hermitian adjoint (authors usually write ψ̄ = ψ†γ0 for the Dirac adjoint) and Jμ is the probability four-current, while the Klein-Gordon equation does not:[24]

ρ = (iħ/2mc2)(ψ* ∂ψ/∂t − ψ ∂ψ*/∂t)

where ∂μ is the four-gradient. Since the initial values of both ψ and ∂ψ/∂t may be freely chosen, the density can be negative (a numeric illustration is given below). Instead, what at first sight looks like a "probability density" and "probability current" has to be reinterpreted as a charge density and current density when multiplied by the electric charge. Then, the wavefunction ψ is not a wavefunction at all, but is reinterpreted as a field.[13] The density and current of electric charge always satisfy a continuity equation:

∂ρ/∂t + ∇ · j = 0

as charge is a conserved quantity. Probability density and current also satisfy a continuity equation, because probability is conserved; however this is only possible in the absence of interactions.
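The promised numeric illustration of the non-positive-definiteness (my sketch, in natural units, with a numerical time derivative standing in for the analytic one): a spatially uniform superposition with a dominant negative-frequency amplitude makes the Klein-Gordon ρ negative, while the Dirac-style density |ψ|2 is positive by construction.

```python
# KG density rho = (i hbar / 2 m c^2)(psi* dpsi/dt - psi dpsi*/dt) evaluated
# for psi(t) = a e^{-iEt/hbar} + b e^{+iEt/hbar}; with b > a it comes out
# negative (analytically rho = E (a^2 - b^2) for this psi).
import numpy as np

hbar = m = c = 1.0      # natural units for the illustration
E = 1.0                 # mode energy
a, b = 0.6, 1.0         # negative-frequency amplitude dominates

t = np.linspace(0.0, 10.0, 2001)
psi = a * np.exp(-1j * E * t / hbar) + b * np.exp(+1j * E * t / hbar)
dpsi = np.gradient(psi, t)   # numerical d(psi)/dt

rho_kg = (1j * hbar / (2 * m * c**2)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))
print("KG density (expected E(a^2 - b^2) = -0.64):", rho_kg.real[1000].round(3))
print("|psi|^2 is never negative:", (np.abs(psi)**2).min() >= 0.0)
```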
Spin and electromagnetically interacting particles

Including interactions in RWEs is generally difficult. Minimal coupling is a simple way to include the electromagnetic interaction. For one charged particle of electric charge q in an electromagnetic field, given by the magnetic vector potential A(r, t) defined by the magnetic field B = ∇ × A, and the electric scalar potential ϕ(r, t), this is:[25]

Ê → Ê − qϕ,   p̂ → p̂ − qA,   or covariantly Pμ → Pμ − qAμ

where Pμ is the four-momentum that has a corresponding 4-momentum operator, and Aμ the four-potential. In the following, the non-relativistic limit refers to the limiting cases:

E − qϕ ≈ mc2,   p ≈ mv

that is, the total energy of the particle is approximately the rest energy for small electric potentials, and the momentum is approximately the classical momentum.

Spin 0

In RQM, the KG equation admits the minimal coupling prescription:

[(Ê − qϕ)2 − c2 (p̂ − qA)·(p̂ − qA) − (mc2)2] ψ = 0

In the case where the charge is zero, the equation reduces trivially to the free KG equation, so nonzero charge is assumed below. This is a scalar equation that is invariant under the irreducible one-dimensional scalar (0,0) representation of the Lorentz group. This means that all of its solutions will belong to a direct sum of (0,0) representations. Solutions that do not belong to the irreducible (0,0) representation will have two or more independent components. Such solutions cannot in general describe particles with nonzero spin, since spin components are not independent. Other constraints will have to be imposed for that, e.g. the Dirac equation for spin 1/2, see below. Thus if a system satisfies the KG equation only, it can only be interpreted as a system with zero spin.

The electromagnetic field is treated classically according to Maxwell's equations, and the particle is described by a wavefunction, the solution to the KG equation. The equation is, as it stands, not always very useful, because massive spinless particles, such as the π-mesons, experience the much stronger strong interaction in addition to the electromagnetic interaction. It does, however, correctly describe charged spinless bosons in the absence of other interactions. The KG equation is applicable to spinless charged bosons in an external electromagnetic potential.[2] As such, the equation cannot be applied to the description of atoms, since the electron is a spin 1/2 particle. In the non-relativistic limit the equation reduces to the Schrödinger equation for a spinless charged particle in an electromagnetic field:[16]

iħ ∂ψ/∂t = [ (1/2m)(p̂ − qA)2 + qϕ ] ψ

Spin 1/2

Non-relativistically, spin was phenomenologically introduced in the Pauli equation by Pauli in 1927 for particles in an electromagnetic field:

iħ ∂ψ/∂t = [ (1/2m)(σ · (p̂ − qA))2 + qϕ ] ψ

by means of the 2 × 2 Pauli matrices, where ψ is not just a scalar wavefunction as in the non-relativistic Schrödinger equation, but a two-component spinor field:

ψ = (ψ↑, ψ↓)T

where the subscripts ↑ and ↓ refer to the "spin up" (σ = +1/2) and "spin down" (σ = −1/2) states.[note 2]

In RQM, the Dirac equation can also incorporate minimal coupling, rewritten from above:

[Ê − qϕ − cα·(p̂ − qA) − βmc2] ψ = 0

and it was the first equation to accurately predict spin, a consequence of the 4 × 4 gamma matrices γ0 = β, γ = (γ1, γ2, γ3) = βα = (βα1, βα2, βα3). There is a 4 × 4 identity matrix pre-multiplying the energy operator (including the potential energy term), conventionally not written for simplicity and clarity (i.e. treated like the number 1). Here ψ is a four-component spinor field, which is conventionally split into two two-component spinors in the form:[note 3]

ψ = (ψ+, ψ−)T

The 2-spinor ψ+ corresponds to a particle with 4-momentum (E, p) and charge q and two spin states (σ = ±1/2, as before).
The other 2-spinor ψ₋ corresponds to a similar particle with the same mass and spin states, but negative 4-momentum −(E, p) and charge −q, that is, negative-energy states, time-reversed momentum, and negated charge. This was the first interpretation and prediction of a particle and corresponding antiparticle. See Dirac spinor and bispinor for further description of these spinors. In the non-relativistic limit the Dirac equation reduces to the Pauli equation (see Dirac equation for how). When applied to a one-electron atom or ion, setting A = 0 and ϕ to the appropriate electrostatic potential, additional relativistic terms include the spin–orbit interaction, the electron gyromagnetic ratio, and the Darwin term. In ordinary QM these terms have to be put in by hand and treated using perturbation theory. The positive energies do account accurately for the fine structure.

Within RQM, for massless particles the Dirac equation reduces to:

$$\left(E + c\,\boldsymbol{\sigma}\cdot\mathbf{p}\right)\psi_- = 0,\qquad \left(E - c\,\boldsymbol{\sigma}\cdot\mathbf{p}\right)\psi_+ = 0$$

the first of which is the Weyl equation, a considerable simplification applicable for massless neutrinos.[26] This time there is a 2 × 2 identity matrix pre-multiplying the energy operator, conventionally not written. In RQM it is useful to take this as the zeroth Pauli matrix σ⁰, which couples to the energy operator (time derivative), just as the other three matrices couple to the momentum operator (spatial derivatives).

The Pauli and gamma matrices were introduced here, in theoretical physics, rather than in pure mathematics itself. They have applications to quaternions and to the SU(2) and SO(3) Lie groups, because they satisfy the important commutator [ , ] and anticommutator [ , ]₊ relations respectively:

$$[\sigma_a, \sigma_b] = 2i\varepsilon_{abc}\,\sigma_c,\qquad [\sigma_a, \sigma_b]_+ = 2\delta_{ab}\,\sigma_0$$

where ε_abc is the three-dimensional Levi-Civita symbol. The gamma matrices form bases in Clifford algebra and have a connection to the components of the flat spacetime Minkowski metric η^{αβ} in the anticommutation relation:

$$[\gamma^\alpha, \gamma^\beta]_+ = \gamma^\alpha\gamma^\beta + \gamma^\beta\gamma^\alpha = 2\eta^{\alpha\beta}$$

(This can be extended to curved spacetime by introducing vierbeins, but is not the subject of special relativity.) In 1929, the Breit equation was found to describe two or more electromagnetically interacting massive spin-1/2 fermions to first-order relativistic corrections; one of the first attempts to describe such a relativistic quantum many-particle system. This is, however, still only an approximation, and the Hamiltonian includes numerous long and complicated sums.

Helicity and chirality

The helicity operator is defined by:

$$\hat{h} = \hat{\mathbf{S}}\cdot\frac{c\,\hat{\mathbf{p}}}{\sqrt{E^2 - \left(m_0c^2\right)^2}}$$

where p is the momentum operator, S the spin operator for a particle of spin s, E is the total energy of the particle, and m₀ its rest mass. Helicity indicates the orientations of the spin and translational momentum vectors.[27] Helicity is frame-dependent because of the 3-momentum in the definition, and is quantized due to spin quantization, with discrete positive values for parallel alignment and negative values for antiparallel alignment. An automatic occurrence in the Dirac equation (and the Weyl equation) is the projection of the spin-1/2 operator on the 3-momentum (times c), σ · cp, which is proportional to the helicity (for the spin-1/2 case). For massless particles the helicity simplifies to:

$$\hat{h} = \hat{\mathbf{S}}\cdot\frac{c\,\hat{\mathbf{p}}}{E}$$

Higher spins

The Dirac equation can only describe particles of spin 1/2. Beyond the Dirac equation, RWEs have been applied to free particles of various spins.
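Both the gamma-matrix anticommutation relation and the quantization of helicity can be checked numerically. A minimal sketch (mine, not from the article), assuming the Dirac representation of the gamma matrices, the (+,−,−,−) metric signature, and an arbitrary test momentum:

```python
import numpy as np

# Pauli and Dirac-representation gamma matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1, -1, -1, -1]).astype(complex)  # Minkowski metric, (+,-,-,-)

# {gamma^a, gamma^b} = 2 eta^{ab} I
for a in range(4):
    for b in range(4):
        anti = gammas[a] @ gammas[b] + gammas[b] @ gammas[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))

# Helicity quantization: eigenvalues of sigma . p/|p| are +/-1,
# i.e. helicity +/- hbar/2 for the spin-1/2 case
p = np.array([0.3, -1.2, 0.4])
s_dot_phat = (p[0]*sx + p[1]*sy + p[2]*sz) / np.linalg.norm(p)
print(np.round(np.linalg.eigvalsh(s_dot_phat), 10))  # -> [-1.  1.]
print("Gamma anticommutation relations verified.")
```

The two eigenvalues ±1 correspond to the parallel and antiparallel spin-momentum alignments described in the text.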
In 1936, Dirac extended his equation to all fermions; three years later Fierz and Pauli rederived the same equation.[28] The Bargmann–Wigner equations were found in 1948 using Lorentz group theory, applicable for all free particles with any spin.[29][30] Considering the factorization of the KG equation above, and more rigorously by Lorentz group theory, it becomes natural to introduce spin in the form of matrices. The wavefunctions are multicomponent spinor fields, which can be represented as column vectors of functions of space and time:

$$\psi = \begin{pmatrix}\psi_1 \\ \vdots \\ \psi_n\end{pmatrix} = \left(\psi_1^{\,*}\ \cdots\ \psi_n^{\,*}\right)^\dagger$$

where the expression on the right is the Hermitian conjugate. For a massive particle of spin s, there are 2s + 1 components for the particle, and another 2s + 1 for the corresponding antiparticle (there are 2s + 1 possible σ values in each case), altogether forming a 2(2s + 1)-component spinor field:

$$\psi = \begin{pmatrix}\psi_+ \\ \psi_-\end{pmatrix}$$

with the + subscript indicating the particle and the − subscript the antiparticle. However, for massless particles of spin s, there are only ever two-component spinor fields: one is for the particle, in the one helicity state corresponding to +s, and the other for the antiparticle, in the opposite helicity state corresponding to −s. According to the relativistic energy–momentum relation, all massless particles travel at the speed of light, so particles traveling at the speed of light are also described by two-component spinors. Historically, Élie Cartan found the most general form of spinors in 1913, prior to the spinors revealed in the RWEs following the year 1927. For equations describing higher-spin particles, the inclusion of interactions is nowhere near as simple as minimal coupling; minimal coupling leads to incorrect predictions and self-inconsistencies.[31] For spin greater than ħ/2, the RWE is not fixed by the particle's mass, spin, and electric charge; the electromagnetic moments (electric dipole moments and magnetic dipole moments) allowed by the spin quantum number are arbitrary. (Theoretically, magnetic charge would contribute also.) For example, the spin-1/2 case only allows a magnetic dipole, but for spin-1 particles magnetic quadrupoles and electric dipoles are also possible.[26] For more on this topic, see multipole expansion and (for example) Cédric Lorcé (2009).[32][33]

Velocity operator

The Schrödinger/Pauli velocity operator can be defined for a massive particle using the classical definition p = mv, substituting quantum operators in the usual way:[34]

$$\hat{\mathbf{v}} = \frac{\hat{\mathbf{p}}}{m}$$

which has eigenvalues that take any value. In RQM, the Dirac theory, it is:

$$\hat{\mathbf{v}} = \frac{i}{\hbar}\left[\hat{H}, \hat{\mathbf{r}}\right] = c\,\boldsymbol{\alpha}$$

which must have eigenvalues between ±c. See Foldy–Wouthuysen transformation for more theoretical background.

Relativistic quantum Lagrangians

The Hamiltonian operators in the Schrödinger picture are one approach to forming the differential equations for ψ. An equivalent alternative is to determine a Lagrangian (really meaning Lagrangian density), then generate the differential equation by the field-theoretic Euler–Lagrange equation:

$$\partial_\mu\!\left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)}\right) - \frac{\partial\mathcal{L}}{\partial\psi} = 0$$

For some RWEs, a Lagrangian can be found by inspection. For example, the Dirac Lagrangian is:[35]

$$\mathcal{L} = \bar{\psi}\left(i\hbar c\,\gamma^\mu\partial_\mu - mc^2\right)\psi$$

and the Klein–Gordon Lagrangian is (up to an overall constant):

$$\mathcal{L} = \hbar^2\,\partial^\mu\psi^{*}\,\partial_\mu\psi - m^2c^2\,\psi^{*}\psi$$

This is not possible for all RWEs, and is one reason the Lorentz group theoretic approach is important and appealing: fundamental invariance and symmetries in space and time can be used to derive RWEs using appropriate group representations.
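The contrast between the two velocity operators can be seen directly from the matrices. A minimal sketch (my illustration, Dirac representation assumed): while p̂/m has a continuous, unbounded spectrum, each component of the Dirac velocity operator cα is a finite matrix whose eigenvalues are exactly ±c.

```python
import numpy as np

# Dirac velocity operator along x: v_x = c * alpha_x, Dirac representation
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
alpha_x = np.block([[Z2, sx], [sx, Z2]])

c = 299_792_458.0  # speed of light, m/s
print(np.linalg.eigvalsh(c * alpha_x))  # -> [-c, -c, +c, +c]
```

Each eigenvalue is doubly degenerate; measured velocity components of ±c coexisting with ordinary average velocities is part of what the Foldy–Wouthuysen transformation mentioned above is designed to untangle.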
The Lagrangian approach with field interpretation of ψ is the subject of QFT rather than RQM: Feynman's path integral formulation uses invariant Lagrangians rather than Hamiltonian operators, since the latter can become extremely complicated; see (for example) S. Weinberg (1995).[36]

Relativistic quantum angular momentum

In non-relativistic QM, the angular momentum operator is formed from the classical pseudovector definition L = r × p. In RQM, the position and momentum operators are inserted directly where they appear in the orbital relativistic angular momentum tensor, defined from the four-dimensional position and momentum of the particle, equivalently a bivector in the exterior algebra formalism:[37]

$$M^{\mu\nu} = X^\mu P^\nu - X^\nu P^\mu,\qquad \mathbf{M} = \mathbf{X}\wedge\mathbf{P}$$

which are six components altogether: three are the non-relativistic 3-orbital angular momenta, M¹² = L³, M²³ = L¹, M³¹ = L², and the other three, M⁰¹, M⁰², M⁰³, are boosts of the centre of mass of the rotating object. An additional relativistic-quantum term has to be added for particles with spin. For a particle of rest mass m, the total angular momentum tensor is:

$$J^{\mu\nu} = M^{\mu\nu} + S^{\mu\nu}$$

where the spin contribution S^{μν} can be written (using the Hodge dual ⋆) in terms of

$$W_\mu = \frac{1}{2}\,\varepsilon_{\mu\nu\rho\sigma}J^{\nu\rho}P^\sigma$$

the Pauli–Lubanski pseudovector.[38] For more on relativistic spin, see (for example) S. M. Troshin and N. E. Tyurin (1994).[39]

Thomas precession and spin–orbit interactions

In 1926 the Thomas precession was discovered: relativistic corrections to the spin of elementary particles, with application in the spin–orbit interaction of atoms and the rotation of macroscopic objects.[40][41] In 1939 Wigner derived the Thomas precession. In classical electromagnetism and special relativity, an electron moving with a velocity v through an electric field E but not a magnetic field B will, in its own frame of reference, experience a Lorentz-transformed magnetic field B′:

$$\mathbf{B}' = -\gamma\,\frac{\mathbf{v}\times\mathbf{E}}{c^2}$$

In the non-relativistic limit v << c, γ ≈ 1:

$$\mathbf{B}' \approx -\frac{\mathbf{v}\times\mathbf{E}}{c^2}$$

so the non-relativistic spin interaction Hamiltonian becomes:[42]

$$\hat{H} = -\boldsymbol{\mu}\cdot\mathbf{B} - \boldsymbol{\mu}\cdot\mathbf{B}'$$

where the first term is already the non-relativistic magnetic moment interaction, and the second term is the relativistic correction of order (v/c)²; but this disagrees with experimental atomic spectra by a factor of 1/2. It was pointed out by L. Thomas that there is a second relativistic effect: an electric field component perpendicular to the electron velocity causes an additional acceleration of the electron perpendicular to its instantaneous velocity, so the electron moves in a curved path. The electron moves in a rotating frame of reference, and this additional precession of the electron is called the Thomas precession. It can be shown[43] that the net result of this effect is that the spin–orbit interaction is reduced by half, as if the magnetic field experienced by the electron has only one-half the value, and the relativistic correction in the Hamiltonian is:

$$\hat{H}' = -\frac{1}{2}\,\boldsymbol{\mu}\cdot\mathbf{B}'$$

In the case of RQM, the factor of 1/2 is predicted by the Dirac equation.[42]

The events which led to and established RQM, and the continuation beyond into quantum electrodynamics (QED), are summarized below [see, for example, R. Resnick and R. Eisberg (1985),[44] and P. W. Atkins (1974)[45]]. More than half a century of experimental and theoretical research, from the 1890s through to the 1950s, in the new and still mysterious quantum theory revealed that a number of phenomena could not be explained by QM alone. Special relativity, found at the turn of the 20th century, turned out to be a necessary component, leading to unification: RQM.
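For a sense of the scale of the motional field B′ (this estimate is not in the original text; the Bohr-model inputs v ≈ αc and the Coulomb field at r = a₀ are my assumptions):

```python
import math

# Rough magnitude of the motional magnetic field B' = v E / c^2 seen by the
# electron in hydrogen. Assumed inputs (Bohr-model estimates):
c = 2.998e8            # speed of light, m/s
alpha = 1 / 137.036    # fine-structure constant
e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
a0 = 5.292e-11         # Bohr radius, m

v = alpha * c                           # electron speed, Bohr model
E = e / (4 * math.pi * eps0 * a0**2)    # Coulomb field at r = a0, V/m
B_prime = v * E / c**2                  # motional field, tesla

print(f"E ~ {E:.2e} V/m, B' ~ {B_prime:.1f} T")  # B' of order 10 T
```

A field of order 10 T couples to the Bohr magneton with an energy μ_B·B′ of order 10⁻³ eV, which is indeed the fine-structure scale α²·(13.6 eV), consistent with the (v/c)² counting above.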
Theoretical predictions and experiments mainly focused on the newly found atomic physics, nuclear physics, and particle physics, by considering spectroscopy, diffraction, and scattering of particles, and the electrons and nuclei within atoms and molecules. Numerous results are attributed to the effects of spin.

Relativistic description of particles in quantum phenomena

In 1905 Einstein explained the photoelectric effect: a particle description of light as photons. In 1916, Sommerfeld explained fine structure: the splitting of the spectral lines of atoms due to first-order relativistic corrections. The Compton effect of 1923 provided more evidence that special relativity does apply, in this case to a particle description of photon–electron scattering. De Broglie extended wave–particle duality to matter: the de Broglie relations, which are consistent with special relativity and quantum mechanics. By 1927, Davisson and Germer and, separately, G. Thomson had successfully diffracted electrons, providing experimental evidence of wave–particle duality.

Quantum non-locality and relativistic locality

In 1935, Einstein, Podolsky, and Rosen published a paper[48] concerning quantum entanglement of particles, questioning quantum nonlocality and the apparent violation of causality upheld in SR: particles can appear to interact instantaneously at arbitrary distances. This was a misconception, since information is not and cannot be transferred in the entangled states; rather, the information transmission is in the process of measurement by two observers (one observer has to send a signal to the other, which cannot exceed c). QM does not violate SR.[49][50] In 1959, Bohm and Aharonov published a paper[51] on the Aharonov–Bohm effect, questioning the status of electromagnetic potentials in QM. The EM field tensor and EM 4-potential formulations are both applicable in SR, but in QM the potentials enter the Hamiltonian (see above) and influence the motion of charged particles even in regions where the fields are zero. In 1964, Bell's theorem was published in a paper on the EPR paradox,[52] showing that QM cannot be derived from local hidden variable theories if locality is to be maintained.

The Lamb shift

In 1947 the Lamb shift was discovered: a small difference in the 2S₁/₂ and 2P₁/₂ levels of hydrogen, due to the interaction between the electron and the vacuum. Lamb and Retherford experimentally measured stimulated radio-frequency transitions between the 2S₁/₂ and 2P₁/₂ hydrogen levels using microwave radiation.[53] An explanation of the Lamb shift was presented by Bethe. Papers on the effect were published in the early 1950s.[54]

Notes

1. ^ Other common notations include ms and sz etc., but this would clutter expressions with unnecessary subscripts. The subscripts σ labeling spin values are not to be confused with tensor indices or the Pauli matrices.
2. ^ This spinor notation is not necessarily standard; in the context of spin 1/2, this informal identification of the components with "spin up" and "spin down" states is commonly made.
3. ^ Again this notation is not necessarily standard; here we show informally the correspondence of energy, helicity, and spin states.

References

1. ^ D. H. Perkins (2000). Introduction to High Energy Physics. Cambridge University Press. ISBN 978-0-521-62196-0.
2. ^ a b c d B. R. Martin, G. Shaw. Particle Physics. Manchester Physics Series (3rd ed.). John Wiley & Sons. p. 3. ISBN 978-0-470-03294-7.
3. ^ M. Reiher, A. Wolf (2009). Relativistic Quantum Chemistry. John Wiley & Sons. ISBN 978-3-527-62749-3.
4. ^ P. Strange (1998). Relativistic Quantum Mechanics: With Applications in Condensed Matter and Atomic Physics. Cambridge University Press. ISBN 978-0-521-56583-7.
5. ^ P. Mohn (2003). Magnetism in the Solid State: An Introduction. Springer Series in Solid-State Sciences. 134. Springer. p. 6. ISBN 978-3-540-43183-1.
6. ^ a b B. R. Martin, G. Shaw. Particle Physics. Manchester Physics Series (3rd ed.). John Wiley & Sons. pp. 5–6. ISBN 978-0-470-03294-7.
7. ^ A. Messiah (1981). Quantum Mechanics. 2. North-Holland Publishing Company. p. 875. ISBN 978-0-7204-0045-8.
8. ^ J. R. Forshaw; A. G. Smith (2009). Dynamics and Relativity. Manchester Physics Series. John Wiley & Sons. pp. 258–259. ISBN 978-0-470-01460-8.
9. ^ W. Greiner (2000). Relativistic Quantum Mechanics. Wave Equations (3rd ed.). Springer. p. 70. ISBN 978-3-540-67457-3.
10. ^ A. Wachter (2011). Relativistic Quantum Mechanics. Springer. p. 34. ISBN 90-481-3645-8.
11. ^ Weinberg, S. (1964). "Feynman Rules for Any Spin". Phys. Rev. 133 (5B): B1318–B1332. Bibcode:1964PhRv..133.1318W. doi:10.1103/PhysRev.133.B1318; Weinberg, S. (1964). "Feynman Rules for Any Spin. II. Massless Particles". Phys. Rev. 134 (4B): B882–B896. Bibcode:1964PhRv..134..882W. doi:10.1103/PhysRev.134.B882; Weinberg, S. (1969). "Feynman Rules for Any Spin. III". Phys. Rev. 181 (5): 1893–1899. Bibcode:1969PhRv..181.1893W. doi:10.1103/PhysRev.181.1893.
12. ^ K. Masakatsu (2012). "Superradiance Problem of Bosons and Fermions for Rotating Black Holes in Bargmann–Wigner Formulation". arXiv:1208.0644 [gr-qc].
13. ^ a b C. B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. pp. 1193–1194. ISBN 978-0-07-051400-3.
14. ^ R. Resnick; R. Eisberg (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). John Wiley & Sons. p. 274. ISBN 978-0-471-87373-0.
15. ^ L. D. Landau; E. M. Lifshitz (1981). Quantum Mechanics: Non-Relativistic Theory. 3. Elsevier. p. 455. ISBN 978-0-08-050348-6.
16. ^ a b Y. Peleg; R. Pnini; E. Zaarur; E. Hecht (2010). Quantum Mechanics. Schaum's Outlines (2nd ed.). McGraw-Hill. p. 181. ISBN 978-0-07-162358-2.
17. ^ E. Abers (2004). Quantum Mechanics. Addison Wesley. p. 425. ISBN 978-0-13-146100-0.
18. ^ A. Wachter (2011). Relativistic Quantum Mechanics. Springer. p. 5. ISBN 90-481-3645-8.
19. ^ E. Abers (2004). Quantum Mechanics. Addison Wesley. p. 415. ISBN 978-0-13-146100-0.
20. ^ a b R. Penrose (2005). The Road to Reality. Vintage Books. pp. 620–621. ISBN 978-0-09-944068-0.
21. ^ Bransden, B. H.; Joachain, C. J. (1983). Physics of Atoms and Molecules (1st ed.). Prentice Hall. p. 634. ISBN 978-0-582-44401-0.
22. ^ W. T. Grandy (1991). Relativistic Quantum Mechanics of Leptons and Fields. Springer. p. 54. ISBN 978-0-7923-1049-5.
23. ^ E. Abers (2004). Quantum Mechanics. Addison Wesley. p. 423. ISBN 978-0-13-146100-0.
24. ^ D. McMahon (2008). Quantum Field Theory. Demystified. McGraw Hill. p. 114. ISBN 978-0-07-154382-8.
25. ^ Bransden, B. H.; Joachain, C. J. (1983). Physics of Atoms and Molecules (1st ed.). Prentice Hall. pp. 632–635. ISBN 978-0-582-44401-0.
26. ^ a b C. B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. p. 1194. ISBN 978-0-07-051400-3.
27. ^ P. Labelle (2010). Supersymmetry. Demystified. McGraw-Hill. ISBN 978-0-07-163641-4.
28. ^ S. Esposito (2011). "Searching for an equation: Dirac, Majorana and the others". Annals of Physics. 327 (6): 1617–1644. arXiv:1110.6878. Bibcode:2012AnPhy.327.1617E. doi:10.1016/j.aop.2012.02.016.
29. ^ Bargmann, V.; Wigner, E. P. (1948). "Group theoretical discussion of relativistic wave equations". Proc. Natl. Acad. Sci. U.S.A. 34 (5): 211–23. Bibcode:1948PNAS...34..211B. doi:10.1073/pnas.34.5.211. PMC 1079095. PMID 16578292.
30. ^ E. Wigner (1939). "On Unitary Representations of the Inhomogeneous Lorentz Group". Annals of Mathematics. 40 (1): 149–204. Bibcode:1939AnMat..40..149W. doi:10.2307/1968551. JSTOR 1968551.
31. ^ T. Jaroszewicz; P. S. Kurzepa (1992). "Geometry of spacetime propagation of spinning particles". Annals of Physics. 216 (2): 226–267. Bibcode:1992AnPhy.216..226J. doi:10.1016/0003-4916(92)90176-M.
32. ^ Cédric Lorcé (2009). "Electromagnetic Properties for Arbitrary Spin Particles: Part 1 − Electromagnetic Current and Multipole Decomposition". arXiv:0901.4199 [hep-ph].
33. ^ Cédric Lorcé (2009). "Electromagnetic Properties for Arbitrary Spin Particles: Part 2 − Natural Moments and Transverse Charge Densities". Physical Review D. 79 (11): 113011. arXiv:0901.4200. doi:10.1103/PhysRevD.79.113011.
34. ^ P. Strange (1998). Relativistic Quantum Mechanics: With Applications in Condensed Matter and Atomic Physics. Cambridge University Press. p. 206. ISBN 978-0-521-56583-7.
35. ^ P. Labelle (2010). Supersymmetry. Demystified. McGraw-Hill. p. 14. ISBN 978-0-07-163641-4.
36. ^ S. Weinberg (1995). The Quantum Theory of Fields. 1. Cambridge University Press. ISBN 978-0-521-55001-7.
37. ^ R. Penrose (2005). The Road to Reality. Vintage Books. pp. 437, 566–569. ISBN 978-0-09-944068-0. Note: Some authors, including Penrose, use Latin letters in this definition, even though it is conventional to use Greek indices for vectors and tensors in spacetime.
38. ^ L. H. Ryder (1996). Quantum Field Theory (2nd ed.). Cambridge University Press. p. 62. ISBN 978-0-521-47814-4.
39. ^ S. M. Troshin; N. E. Tyurin (1994). Spin Phenomena in Particle Interactions. World Scientific. ISBN 978-981-02-1692-4.
40. ^ C. W. Misner; K. S. Thorne; J. A. Wheeler (1973). Gravitation. p. 1146. ISBN 978-0-7167-0344-0.
41. ^ I. Ciufolini; R. R. A. Matzner (2010). General Relativity and John Archibald Wheeler. Springer. p. 329. ISBN 978-90-481-3735-0.
42. ^ a b H. Kroemer (2003). "The Thomas precession factor in spin–orbit interaction". American Journal of Physics. 72 (1): 51–52. arXiv:physics/0310016. Bibcode:2004AmJPh..72...51K. doi:10.1119/1.1615526.
43. ^ Jackson, J. D. (1999). Classical Electrodynamics (3rd ed.). Wiley. p. 548. ISBN 978-0-471-30932-1.
44. ^ R. Resnick; R. Eisberg (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). John Wiley & Sons. pp. 57, 114–116, 125–126, 272. ISBN 978-0-471-87373-0.
45. ^ P. W. Atkins (1974). Quanta: A Handbook of Concepts. Oxford University Press. pp. 168–169, 176, 263, 228. ISBN 978-0-19-855493-6.
46. ^ K. S. Krane (1988). Introductory Nuclear Physics. John Wiley & Sons. pp. 396–405. ISBN 978-0-471-80553-3.
47. ^ K. S. Krane (1988). Introductory Nuclear Physics. John Wiley & Sons. pp. 361–370. ISBN 978-0-471-80553-3.
48. ^ A. Einstein; B. Podolsky; N. Rosen (1935). "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?". Phys. Rev. 47 (10): 777–780. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777.
49. ^ E. Abers (2004). Quantum Mechanics. Addison Wesley. p. 192. ISBN 978-0-13-146100-0.
50. ^ R. Penrose (2005). The Road to Reality. Vintage Books. ISBN 978-0-09-944068-0. Chapter 23: The entangled quantum world.
51. ^ Y. Aharonov; D. Bohm (1959). "Significance of electromagnetic potentials in quantum theory". Physical Review. 115 (3): 485–491. Bibcode:1959PhRv..115..485A. doi:10.1103/PhysRev.115.485.
52. ^ Bell, John (1964). "On the Einstein Podolsky Rosen Paradox". Physics. 1 (3): 195–200. doi:10.1103/PhysicsPhysiqueFizika.1.195.
53. ^ Lamb, Willis E.; Retherford, Robert C. (1947). "Fine Structure of the Hydrogen Atom by a Microwave Method". Physical Review. 72 (3): 241–243. Bibcode:1947PhRv...72..241L. doi:10.1103/PhysRev.72.241.
54. ^ W. E. Lamb, Jr. & R. C. Retherford (1950). "Fine Structure of the Hydrogen Atom. Part I". Phys. Rev. 79 (4): 549–572. Bibcode:1950PhRv...79..549L. doi:10.1103/PhysRev.79.549; W. E. Lamb, Jr. & R. C. Retherford (1951). "Fine Structure of the Hydrogen Atom. Part II". Phys. Rev. 81 (2): 222–232. Bibcode:1951PhRv...81..222L. doi:10.1103/PhysRev.81.222; W. E. Lamb, Jr. (1952). "Fine Structure of the Hydrogen Atom. III". Phys. Rev. 85 (2): 259–276. Bibcode:1952PhRv...85..259L. doi:10.1103/PhysRev.85.259; W. E. Lamb, Jr. & R. C. Retherford (1952). "Fine Structure of the Hydrogen Atom. IV". Phys. Rev. 86 (6): 1014–1022. Bibcode:1952PhRv...86.1014L. doi:10.1103/PhysRev.86.1014; S. Triebwasser; E. S. Dayhoff & W. E. Lamb, Jr. (1953). "Fine Structure of the Hydrogen Atom. V". Phys. Rev. 89 (1): 98–106. Bibcode:1953PhRv...89...98T. doi:10.1103/PhysRev.89.98.
Authors and titles for Mar 2014 (first 25 of 2796 entries)

[1] arXiv:1403.0012 [pdf, ps, other]. Subjects: Information Theory (cs.IT)
[2] arXiv:1403.0020 [pdf, ps, other]. Title: Topos Semantics for Higher-Order Modal Logic. Journal-ref: Logique & Analyse vol 57, no 228 (2014), 591–636. Subjects: Logic (math.LO); Category Theory (math.CT)
[3] arXiv:1403.0021 [pdf, ps, other]. Title: Frobenius manifolds and Frobenius algebra-valued integrable systems. Comments: We have removed section 4 of version 1 of this paper. This material will be moved to a new paper entitled "Integrability of the Frobenius algebra-valued KP hierarchy", which is an improved version of the paper arXiv:1401.2216v1. For the current paper, we have added two new sections to discuss "$\mathcal{A}$-valued TQFT" and "$\mathcal{A}$-valued dispersive integrable systems". Subjects: Mathematical Physics (math-ph); Differential Geometry (math.DG); Exactly Solvable and Integrable Systems (nlin.SI)
[4] arXiv:1403.0022 [pdf, other]. Title: Noise prevents infinite stretching of the passive field in a stochastic vector advection equation. Comments: 23 pages, 4 figures. Subjects: Probability (math.PR); Mathematical Physics (math-ph); Analysis of PDEs (math.AP)
[5] arXiv:1403.0023 [pdf, ps, other]. Title: Superspecial rank of supersingular abelian varieties and Jacobians. Comments: V2: New coauthor, major rewrite
[6] arXiv:1403.0026 [pdf, ps, other]. Title: Commensurations and Metric Properties of Houghton's Groups. Journal-ref: Pacific J. Math. 285 (2016) 289-301. Subjects: Group Theory (math.GR)
[7] arXiv:1403.0027 [pdf, ps, other]. Title: The Frobenius-Virasoro algebra and Euler equations. Authors: Dafeng Zuo. Comments: Comments are welcome. Journal-ref: Journal of Geometry and Physics 86 (2014) 203–210. Subjects: Mathematical Physics (math-ph); Exactly Solvable and Integrable Systems (nlin.SI)
[8] arXiv:1403.0028 [pdf, ps, other]. Title: Connections of Zero Curvature and Applications to Nonlinear Partial Differential Equations. Authors: Paul Bracken. Comments: 22. Journal-ref: Discrete and Continuous Dynamical Systems, Series S, 7, 6, 1165-1179, (2014). Subjects: Differential Geometry (math.DG)
[9] arXiv:1403.0039 [pdf, ps, other]. Title: Canonical bases in tensor products revisited. Comments: 7 pages, v2, improved exposition, one reference added, to appear in Amer. J. Math. Journal-ref: Amer. J. Math. 138 (2016), 1731-1738. Subjects: Representation Theory (math.RT)
[10] arXiv:1403.0041 [pdf, ps, other]. Title: Individual dynamics induces symmetry in network controllability. Comments: 5 pages, 3 figures
[11] arXiv:1403.0042 [pdf, ps, other]. Title: Infinitely many solutions to a fractional nonlinear Schrödinger equation. Comments: arXiv admin note: text overlap with arXiv:1307.2301 by other authors. Subjects: Analysis of PDEs (math.AP)
[12] arXiv:1403.0045 [pdf, ps, other]. Title: Polyhedra, Complexes, Nets and Symmetry. Authors: Egon Schulte. Comments: Acta Crystallographica Section A (to appear). Subjects: Metric Geometry (math.MG); Combinatorics (math.CO)
[13] arXiv:1403.0046 [pdf, other]. Title: Well-posedness and Robust Preconditioners for the Discretized Fluid-Structure Interaction Systems. Authors: Jinchao Xu, Kai Yang. Comments: 1. Added two preconditioners into the analysis and implementation; 2. Rerun all the numerical tests; 3. Changed title, abstract and corrected lots of typos and inconsistencies; 4. Added references. Journal-ref: Computer Methods in Applied Mechanics and Engineering 292 (2015): 69-91. Subjects: Numerical Analysis (math.NA)
[14] arXiv:1403.0053 [pdf, ps, other]. Title: Bootstrapping and Askey-Wilson polynomials. Comments: 17 pages, no figures. Subjects: Classical Analysis and ODEs (math.CA); Combinatorics (math.CO)
[15] arXiv:1403.0054 [pdf, other]. Subjects: Information Theory (cs.IT)
[16] arXiv:1403.0060 [pdf, ps, other]. Title: Regression analysis in quantum language. Authors: Shiro Ishikawa. Comments: arXiv admin note: text overlap with arXiv:1402.0606, arXiv:1401.2709, arXiv:1312.6757. Subjects: Statistics Theory (math.ST)
[17] arXiv:1403.0063 [pdf, ps, other]. Title: Restricted Kac modules of Hamiltonian Lie superalgebras of odd type. Authors: Jixia Yuan, Wende Liu. Comments: 13 pages. Journal-ref: Monatsh. Math. 178 (2015) 473-488. Subjects: Representation Theory (math.RT)
[18] arXiv:1403.0070 [pdf, ps, other]. Title: Equidistribution of saddle periodic points for Henon-type automorphisms of C^k. Comments: 49 pages
[19] arXiv:1403.0075 [pdf, ps, other]. Title: Singularity of the varieties of representations of lattices in solvable Lie groups. Authors: Hisashi Kasuya. Comments: 11 pages. To appear in J. Topol. Anal. Subjects: Group Theory (math.GR); Algebraic Geometry (math.AG); Complex Variables (math.CV); Geometric Topology (math.GT)
[20] arXiv:1403.0076 [pdf, ps, other]. Title: Unavoidable collections of balls for processes with isotropic unimodal Green function. Authors: Wolfhard Hansen. Subjects: Analysis of PDEs (math.AP); Probability (math.PR)
[21] arXiv:1403.0078 [pdf, ps, other]. Title: A note on bi-linear multipliers. Subjects: Classical Analysis and ODEs (math.CA)
[22] arXiv:1403.0079 [pdf, ps, other]. Title: An extension of Herglotz's theorem to the quaternions. Comments: to appear in Journal of Mathematical Analysis and Applications 2014. Subjects: Functional Analysis (math.FA)
[23] arXiv:1403.0088 [pdf, ps, other]. Title: Union-intersecting set systems. Comments: 9 pages. Subjects: Combinatorics (math.CO)
[24] arXiv:1403.0089 [pdf, ps, other]. Title: Factorization Property of Generalized s-self-decomposable measures and class $L^f$ distributions. Journal-ref: Theory Probab. Appl. 55, No 4, (2011), pp. 692-698; and Teor. Verojatn. Primenen. 55, no 4 (2010), pp. 812-819. Subjects: Probability (math.PR)
[25] arXiv:1403.0094 [pdf, ps, other]. Title: Asymptotics of eigenstates of elliptic problems with mixed boundary data on domains tending to infinity. Comments: Asymptotic Analysis, 2013. Subjects: Analysis of PDEs (math.AP)
The Unabashed Academic
07 September 2016

Could dark matter be super cold neutrinos?

Probably the greatest physics problems of the current generation are the cosmological questions. Thanks to the development of powerful new telescopes (many of them in space) in the last years of the twentieth century, startling new and unexpected results have pointed the way to new physics. These currently go under the names of "dark matter" and "dark energy", but those aren't real descriptions; rather, they are suggestions for what might provide theoretical solutions to experimental anomalies. And, as naming often does, they guide our thinking into explorations of how to come up with new physics.

The problem that "dark matter" is supposed to resolve began in the 1970s with the observations of Vera Rubin. By making a careful analysis of the motion of stars in galaxies, she found an unexpected anomaly. As any first-year physics student can tell you, Newton's law of gravitation tells you how planets orbit around the sun. The mass of the sun draws the planets towards it, bending their velocities ever inward in (nearly) circular orbits. The mathematical form of the law produces a connection between the distance the planets are from the sun and the speed (and therefore the period) of the planets. That connection was known empirically before Newton, as Kepler's third law of planetary motion: the cube of the distance from the sun is proportional to the square of the planet's period. The fact that Newton's laws of motion together with his law of gravity explained that result was considered a convincing proof of Newton's theories.

A galaxy has a structure somewhat like that of a solar system: its visible mass is concentrated toward the center (including a massive black hole), and that central concentration was expected to dominate the motion of the stars in the galaxy. Rubin found that the speed of the stars around the center didn't follow Kepler's law. The far-out stars were going too fast. This suggested that there was an unseen distributed mass that we didn't know about (or that Newton's law of gravity perhaps fails at long distances – an option that, in my opinion, has not received enough attention, though that's for another post). Observations in the past thirty years have increasingly supported the idea that there is some extra matter that we can't see – and a lot of it. More than the matter that we do see. As a result, a growing number of physicists are exploring what might be causing this.

I saw a lovely colloquium yesterday about one such search. Carter Hall, one of my colleagues in the University of Maryland Physics Department, spoke about the LUX experiment. This explores the possibility that there is a weakly interacting massive particle (a "WIMP") that we don't know about – one that doesn't interact with other particles electromagnetically, so it doesn't give off or absorb light, and doesn't interact strongly (with the nuclear force), so it doesn't create pions or other particles that would be easily detectable in one of our accelerators. This would make it very difficult to detect. The experiment was a tour de force, looking for possible interactions of a WIMP with a heavy nucleus – xenon. (The interaction probability goes up like the square of the nuclear mass, so a heavy nucleus is much more likely to show a result.) The experiment was incredibly careful, ruling out all known sources of spurious signals.
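To make Rubin's anomaly concrete, here is a minimal sketch (mine, not the post's) comparing the Keplerian rotation curve you'd get if the mass really were central with the roughly flat curves Rubin measured; the central mass and the 220 km/s flat speed are assumed, round numbers.

```python
import numpy as np

G = 6.674e-11                          # m^3 kg^-1 s^-2
M_central = 1e11 * 1.989e30            # assumed central mass: 1e11 solar masses
kpc = 3.086e19                         # metres per kiloparsec
r = np.linspace(1, 15, 8) * kpc        # radii from 1 to 15 kpc

v_kepler = np.sqrt(G * M_central / r)  # v ~ r^(-1/2) if the mass is all central
v_flat = np.full_like(r, 220e3)        # typical observed flat speed, ~220 km/s

# Enclosed mass implied by a flat curve: M(r) = v^2 r / G, growing linearly with r
M_enclosed = v_flat**2 * r / G
for ri, vk, Mi in zip(r, v_kepler, M_enclosed):
    print(f"r = {ri/kpc:5.1f} kpc  v_Kepler = {vk/1e3:6.1f} km/s  "
          f"M_flat(r) = {Mi/1.989e30:.2e} Msun")
```

The Keplerian speeds fall off with radius while the observed speeds don't; a flat curve forces the enclosed mass to keep growing linearly with radius, which is exactly the "unseen distributed mass" of the post.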
It found no results but was able to rule out many possible theories and a broad swath of the parameter space – eliminating many possible masses and interaction strengths. An excellent experiment.

But as I listened to this beautiful lecture, I wondered whether the whole community exploring this problem hadn't made the mistake of looking under the lamppost for our lost car keys. It's sort of wishful thinking to assume that the solution to our problem might be exactly the kind of particle that would be detectable with the incredibly large, powerful, and expensive tools that we have built – particle accelerators. These are designed to allow us to find new physics in the paradigm we have been exploring for nearly a century: finding new sub-nuclear particles and determining their interactions in the framework of quantum field theory.

This reflects a discussion my friend Royce Zia and I have been having for five decades. Royce and I met in undergraduate school (at Princeton) and then became fast friends in grad school (at MIT). We spent many hours there (and since) arguing about deep issues in physics. We both started out assuming we wanted to be elementary particle theorists. That, after all, was where the action was. Quarks had just been proposed, and there was lots of interest in the nuclear force and how to make sense of all the particles that were being produced in accelerators. But we were both transformed by a class in Many-Body Quantum Theory given by Petros Argyres, a condensed matter theorist. In this class we saw many (non-relativistic) examples of emergent phenomena – places where you knew the basic laws and particles but couldn't easily see important results and structures from those basic laws. It took deep theoretical creativity and insight to find a new way of looking at and rearranging those laws so that the phenomena emerged in a natural way.

There are many such examples. The basic laws and particles of atomic and molecular physics were well known at the time. Atoms and molecules are made up of electrons and nuclei (the structure of the nuclei is irrelevant for this physics – only their charge and mass matter), and they are well described by the non-relativistic Schrödinger equation. But once you had many particles – like in a large atom, or a crystal of a metal – there were far too many equations to do anything useful with. Some insight was needed as to how to rearrange those equations so that there was a much simpler starting point. Three examples of this are the shell model of the atom (the basis of all of chemistry), plasmon oscillations in a metal (coherent vibrations of all the valence electrons in a metal together), and superconductivity (the vanishing of electrical resistance in metals at very low temperatures). Each of these was well described by little pieces of the known theory arranged in clever and insightful ways – ways that the original equations gave no obvious hint of in their structure.

I was deeply impressed by this insight and decided that this extracting or explaining of phenomena from new treatments of known physics was just as important – and just as fundamental – as the discovery of new particles or new physical laws. Royce and I argued this for many hours and finally decided to grant both approaches the title of "fundamental physics" – but we decided they were different enough to separate them. So we called the particle physics approach "fundamental-sub-one" and the many-body physics approach "fundamental-sub-two".
(Interestingly, both Royce and I went on to pursue physics careers in the f2 area, he in statistical physics, me in nuclear reaction theory.) In the decades since we had these arguments, physics has made huge progress in f2 physics – from phase transition theory to the understanding and creation of exotic (and commercially important) excitations of many-body systems.

So yesterday, I brought my f2 perspective to listening to Carter talk about dark matter, and I wondered: he was talking all about f1-type solutions. Interesting and important, but could there also be an f2-type solution? We already know about weakly interacting massive particles: neutrinos. They only interact via gravity and the weak nuclear force, not electromagnetically or strongly. Could dark matter simply be a lot of cold neutrinos? They would have to be very cold – travelling at a slow speed – or else they would evaporate. When we make them in nuclear reactions in accelerators, they are typically highly relativistic – travelling at essentially the speed of light. The gravity of the galaxy wouldn't be strong enough to hold them.

That leads to a potential problem for this model. Whatever dark matter is, it has to have been made fairly soon after the big bang – when the universe was very dense, very uniform, and very hot – hot enough to generate lots of particles (mass) from energy. (Why we believe this is too long a story to go into here.) So you would expect that any neutrinos that were made then would be hot – going too fast to become cold dark matter. But suppose there were some unknown emergent mechanism in that hot dense universe – a phase transition – that squeezed out a cold cloud of neutrinos. Neutrinos interact with matter very weakly – and their interaction strength is proportional to their energy, so cold neutrinos interact even more weakly than fast neutrinos. If there were a mechanism that spewed out lots of cold neutrinos, I expect they would interact too weakly with the rest of the matter to come to thermal equilibrium. If the equilibration time were, say, a trillion years, they would stay cold and, if their density were right, could serve as our "dark matter". Most of the experimental dark matter searches wouldn't find these cold neutrinos. Searching for them at this point would have to be a theoretical exploration: can we find a mechanism in hot baryonic matter that will produce a phase transition that spews out lots of cold neutrinos? I don't know of any such mechanism or where to start, but wouldn't it be fun to consider?

19 May 2016

Still a physicist! Thanks, Emmy Noether

Recently while browsing my FaceBook feed, I was tempted to take one of the BuzzFeed quizzes that regularly pop up. Usually, I'm immune to this kind of clickbait, not really being interested in "Which American Idol judge are you?" or "Which Game of Thrones character are you like?" (Though as a frequent traveler, I do often do the ones that ask, "How many states have you visited?" or "How many of the top 150 world travel sites have you seen?") This one asked, "Are you more of a physicist, biologist, or chemist?" This was clearly a quiz for scientists and, though I'm a lifelong physicist (practicing for 50 years), I've always been a "biology appreciator", collecting Wildlife Stamps as a boy, and reading Stephen Jay Gould, E. O. Wilson, Konrad Lorenz, and lots of others as an adult.
And for the past half dozen years or so, I've been holding many conversations with multiple biologists and learning some serious bio in the service of carrying out a deep reform of algebra-based physics to create an IPLS (Introductory Physics for Life Scientists) class – NEXUS/Physics. I wondered whether I had been sufficiently infected with biology memes to have gone over to the dark side.

I needn't have worried. As expected, I came out "Physicist". Their description of a physicist was one I liked and that describes my favorite physicists (and I hope me too): "You're a thinker who loves nothing more than getting stuck into a good intellectual challenge. You love to read, and you've got so much information (useless and otherwise) stored in your brain that everyone wants to have you on their pub quiz team. Physics suits you because it lets you spend your time contemplating some of the smallest and biggest things in the universe, and tackle some really huge questions while you're at it."

But I particularly found one item in the quiz interesting: "Select a real scientist." They offered three female scientists: Emmy Noether, Jane Goodall, and Rosalind Franklin. Although I assume that they matched Emmy to Physics, Jane to Biology, and Rosalind to Chemistry, I think of both Goodall and Franklin as biologists. I have read some of both of their work – one of Jane Goodall's books on chimpanzees (and I regularly contribute to her save-the-chimps foundation), and Rosalind Franklin's paper on X-ray diffraction from DNA crystals. I've never read any of Emmy Noether's original writings, but her work was introduced into my physics classes in junior year and had a powerful impact on my thinking about the world and about physics. That's what I want to talk about here.

[But first, I'm inspired to make one of my typical academic digressions about a topic I've been thinking about: the structure of biological research. Reading E. O. Wilson's memoir, Naturalist, clarified for me a lot of what I have been seeing in my recent conversations with multiple biologists. I refer to this as "the Wilson/Watson abyss". About 1960, E. O. Wilson and J. D. Watson were both new Assistant Professors in the Harvard Biology Department. Over the next few years they engaged in a fierce battle for the soul of biology. What were the key issues for biology research for the next few decades? E. O., a field biologist rapidly becoming the world's greatest expert on ants, argued vigorously for a holistic approach: looking at whole animals, their behavior, and how they interacted with others and their environments. J. D., fresh off his success in deciphering the structure of DNA and offering a molecular model for evolution, argued vigorously for a reductionist approach: studying the molecular mechanisms of biology and the genome. The result was a split into two departments and, essentially, a victory for Watson. Although there is excellent research in both areas, for the past half century the strongest focus has been on molecular biology and molecular models. Premier biology research institutes are often entirely focused on molecular and cellular biology, and far more funding goes into that area.
I personally think this is a problem: the critical biological problems for the next half century are going to require that we understand the systemic aspects of ecology – both for our interaction with the planet and even for medicine (through consideration of the human as an ecosystem, including our microbiome and the implications of social and environmental interactions on it). Of course this digression is inspired by the choices of Jane Goodall – a premier field biologist in the Wilson model (though she came through anthropology as a student of Louis Leakey), and of Rosalind Franklin – a premier biochemist in the Watson model (and her work was critical in allowing the Watson–Crick breakthrough).

An interesting point for another post is to note that evolution is the bridge that spans the Wilson/Watson abyss. Evolution is not a hypothesis or even really a theory, but rather a conclusion that grows out of a number of fundamental principles based strongly in observation and experiment: heredity (through DNA and its copying mechanism), variation, morphogenesis (the building of a phenotype – the individual organism – from the genomic info), and natural selection. (One might choose a different set, but this is one I like so far.) The first lies firmly on the Watson side, the last on the Wilson side. You can't make sense of evolution unless you are willing to consider both ends.]

We now return to our main program. Why did I pick Emmy over Jane and Rosalind, both of whose work I have actually read and think is immensely important? The reason is that, for me as a physicist, Emmy Noether's result was a total game changer in the way I think about physics, the epistemology of physics, and how the world works.

To state her result crudely, in a way that the non-mathematician might understand, Noether's theorem says: if you have a system of interacting objects whose behavior in time is governed by a set of equations that have a symmetry, then you can find a conserved quantity. By a "symmetry", she means that you can change something about your description that doesn't change the math. By a "conserved quantity" she means something you can calculate that doesn't change as the system evolves through time. (Of course Noether's theorem is a mathematical statement, and there are conditions and a process to find the conserved quantity, but that requires a lot of math to elaborate. I refer you to the Wikipedia article on Noether's theorem for those who want the details. Warning: it requires knowledge of Lagrangians and Hamiltonians – junior-level physics.)

This is a little dense. Let's take an example or three to see just what it means. Suppose I have a set of interacting objects – something like the planets in the solar system interacting via gravity, or a set of atoms and molecules interacting via electric forces. We can describe these interactions either using forces or energy. (These approaches can be shown to be mathematically equivalent, though each tends to foreground different ways of thinking about the system.) The key is that the interactions of the objects only depend on the distances between them. This means that I can choose any coordinate system to describe the system: I can put my reference point – the 0 of my coordinates, or origin – anywhere I want. Whatever origin I choose, the distance between two objects is the difference of the positions of those two objects, and when you subtract their positions to get their relative distance, the position of the origin cancels.
This is a symmetry. The equations that describe the motion of the system do not change depending on the position of the origin of the coordinate system. You can choose it as you like – and we typically pick an origin that will make the calculation simpler. This symmetry is called translation invariance. It means you can shift (translate) the origin freely without anything changing.

But what Noether's theorem shows is that the symmetry doesn't just mean we are allowed to choose the coordinate system that makes the calculation simpler; it says there is a conserved quantity, and it allows you to find and calculate it. In the case of translation invariance, Noether's conserved quantity is momentum – in most cases, the product of the mass and velocity for each object. You calculate the momentum of each object in the system, add them up at one time, and for any later time you will always get the same answer, no matter how the objects have moved, even though the motions may be amazingly complicated – and may involve billions of particles!

This is immensely important and has powerful practical implications. One technical example is, "How can you figure out how protons move inside a nucleus or electrons move inside an atom?" In the case of protons, you don't actually know exactly what the force law between two protons is (though there are lots of models), but we are pretty sure that the forces only depend on the distance between them. But we can shoot very fast protons at a nucleus. Sometimes they will strike a proton moving in the nucleus and knock it out. If we measure the momenta of the two outgoing protons, then, since we know the momentum of the incoming proton, we can infer the initial momentum of the struck proton inside the nucleus using momentum conservation (a small numerical illustration appears below). We then do a lot of these scatterings and get a probability distribution for the velocities of protons inside the nucleus. Since we do know the force between electrons and the nucleus (the electric force), this technique is extremely powerful for studying the structure of atoms and molecules. While this seems rather technical, we'll see that there are even more important implications than providing a measurement tool for difficult-to-observe quantum systems.

Two other fairly obvious symmetries in our description of systems are:

• Time translation invariance
• Rotational invariance

The first, time translation, means that it doesn't matter when you start your clock (what time you take as 0 of time). This is true for most dynamic models in physics. Gravitational forces don't depend on time, and neither do electrical ones. Since these are the two forces that dominate everything bigger than a nucleus, this symmetry holds for everything from atoms up to galaxies (where there are some as yet unsolved anomalies). Emmy's theorem says that due to the time translation symmetry there is a conserved quantity – in this case, energy.

The second, rotational invariance, means that it doesn't matter in which direction you point your axes. You can take the positive x direction as being towards the north star or towards the middle star of Orion's belt. (You want your coordinates to be fixed in space, not rotating with the earth, or you introduce fake forces like centrifugal force and Coriolis forces.) The conserved quantity that goes with this is angular momentum, another useful principle (though more complicated to use because of more vectors). OK.
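Here is the promised numerical version of the knockout argument (my illustration; the momenta are made up for the example). Conservation gives the struck proton's initial momentum as p_struck = p_out,1 + p_out,2 − p_in:

```python
import numpy as np

# Hypothetical momenta in some convenient units (illustration only)
p_in = np.array([0.0, 0.0, 900.0])        # incoming fast proton
p_out1 = np.array([150.0, -40.0, 620.0])  # scattered proton
p_out2 = np.array([-90.0, 55.0, 430.0])   # knocked-out proton

# Momentum conservation: p_in + p_struck = p_out1 + p_out2
p_struck = p_out1 + p_out2 - p_in
print("Inferred initial momentum of the struck proton:", p_struck)
```

Repeating this over many scattering events builds up exactly the probability distribution of internal momenta described above, without ever needing the detailed force law.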
That tells us what Noether's theorem delivers – important conservation laws like (linear) momentum, energy, and angular momentum. But we learn about these in introductory physics classes without needing a sophisticated theorem. What does it add?

For me, it adds something deeply epistemological – something fundamental about what we know in physics and how we know it. It shows that two very different things are tightly related: how we are allowed to describe the system at a given instant of time without changing anything (where we can choose our space and time coordinates) – a purely static statement about what kinds of forces or energies we have – and how the system moves in time – a dynamic statement about how things change.

This is immensely powerful. It means that if I have created a mathematical model of a system and I find that energy is NOT conserved, I know that either I have made a mistake, or I have assumed interactions that change with time. If I find that momentum is NOT conserved, I know that I must have tied something to a fixed origin rather than to a relative coordinate between two objects. Now this isn't always wrong or bad. If I have a particle moving through a vibrating fluid, I might want to treat the fluid like an externally prescribed, time-dependent potential energy field. What this will mean is that the energy of my particle will not be conserved, and where the energy goes (into the fluid) will not be correctly represented in this model. A more common example is projectiles or falling bodies. Since the earth is so much larger than our projectiles, we take the origin of our coordinates as a fixed point on the earth instead of taking the force as depending (as it actually does) on the distance between the center of the earth and the projectile. This means we won't see momentum conserved, since we have fixed the earth; momentum transfer to it will not be correctly represented. This might not matter, depending on what we want to focus on.

But what Noether's theorem shows us is that there are powerful – and absolute – links between two distinct ways of thinking about complex systems: the structure of the mathematical models we set up to describe the evolution of systems, and characteristics of how those systems evolve in time. And that the result can be something as powerful and useful as a conservation law blew me away. More, that we now know exactly what characteristics of a mathematical model lead to a conservation law!

There is nothing analogous to this in biology or chemistry – except as it is inherited from Noether's theorem in the mathematical models biologists or chemists build, or as they use energy or charge conservation. But as far as I can tell they rarely pay attention to conservation laws – even when they might do them some good. It also showed me that when you build mathematical models you occasionally hit the jackpot: you get out more than you thought you put in. Extensions of Noether's theorem to other symmetries have become a powerful tool in constructing new models of dynamics. Instead of trying to invent new force laws, we look experimentally for conservation laws, find symmetries that can give those conservation laws, and construct new dynamical models by putting together variables that fit the symmetry. This is the way much of particle physics has functioned for the past 50 years. So that question on the quiz is probably the best selector of the "physicist" category.
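As a closing illustration of the diagnostic use described above (my sketch, not from the post): a two-body force that depends only on the separation conserves total momentum, while tethering one object to a fixed origin breaks translation symmetry – and momentum conservation with it.

```python
import numpy as np

def step(x, v, m, forces, dt=1e-3):
    # One velocity-Verlet step; forces(x) returns the force on each particle
    a = forces(x) / m[:, None]
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = forces(x_new) / m[:, None]
    return x_new, v + 0.5 * (a + a_new) * dt

def spring_pair(x):
    # Force depends only on the separation x1 - x0: translation invariant
    f = 5.0 * (x[1] - x[0])
    return np.array([f, -f])

def spring_pair_pinned(x):
    # Same pair force, plus particle 1 tethered to a fixed origin:
    # translation symmetry broken
    f = 5.0 * (x[1] - x[0])
    return np.array([f, -f - 5.0 * x[1]])

m = np.array([1.0, 2.0])
for forces in (spring_pair, spring_pair_pinned):
    x = np.array([[0.0, 0.0], [1.0, 0.5]])
    v = np.array([[0.3, 0.0], [-0.1, 0.2]])
    p0 = (m[:, None] * v).sum(axis=0)
    for _ in range(5000):
        x, v = step(x, v, m, forces)
    p1 = (m[:, None] * v).sum(axis=0)
    print(forces.__name__, "change in total momentum =", np.round(p1 - p0, 6))
```

The translation-invariant case reports zero change in total momentum; the pinned case does not – exactly the "something tied to a fixed origin" diagnosis in the text.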
Goodall and Franklin both did essential and pivotal work in their fields, but Noether's was a core pillar of all of 20th-century physics and, for me, won hands down. Thanks, Emmy!

12 March 2016

Congratulations, Bernie!

Congratulations, Bernie, on a surprise win in the Michigan primary! But my Bernie-phile friends: please don't fall for the bad cognitive errors I've seen some supporters distributing in responses: binary and one-step thinking, and being misled by inapt metaphors.

First, "a win is a win" carries a lot of associational baggage. Some of it may be true, but it is certainly worth careful analysis, because taking it at face value is a binary thinking error. In Michigan, Bernie beat Hillary by 1.5% of the vote. A win, right? But in delegate count – what matters in this primary election – Hillary took 70 and Bernie 67, increasing her lead. For the primaries and for the election as a whole, one needs to keep in mind that we live in a republic, not a direct democracy. That means we elect a representative government and do not directly elect a president. Winning the total popular vote is not the point (just ask Al Gore), and this is reflected in both the Democratic and Republican primaries, though in different ways.

To see how this works, consider three districts of 10,000 voters each. The winner of each district gets a delegate. Suppose candidate H wins two districts by 6,000 to 4,000 and candidate B wins one district by 9,000 to 1,000. Candidate H gets a total of 13,000 votes, while candidate B gets a total of 17,000. A big popular-vote margin for B (57% to 43%) but a win for H (2 delegates to 1). While this feels unfair, it's a way of guaranteeing that the political process requires coalition building among diverse sub-populations. We're seeing this in Bernie and Hillary's struggle to get the votes of different ethnic groups, different age groups, and different economic classes. In a parliamentary democracy with many parties, like in many European countries, this plays out by having to build coalitions among parties. In the USA, with only two parties, the coalitions are built at this stage. I don't think this is a bad thing, as I think the strength of America is our ability to (sometimes gingerly) bring together many different viewpoints, ethnic groups, and cultures, and get them to live together in reasonable harmony without frequent tribal and inter-group violence (so far). (Sorry, Black Lives Matter, I'm not trying to belittle your legitimate claims about inter-group violence in the US, only to point out that while horrible it has not reached the level of open warfare, and we seem to be finally bringing it into the open enough to possibly make some positive progress.)

Second: well, but "it's an unprecedented upset." This one-step thinking also carries a lot of associational baggage: it means "momentum"! Look at the derivative! That implies big change. Well, perhaps, but one learns in science that projecting derivatives is a tricky and unstable business. (See Mark Twain's quote on the growth of the Mississippi Delta.) Also, the "upset" depends on the difference between a poll and an election. An election is the event: its result is what it is (modulo errors, cheating, hanging chads, etc.). The poll is a sample that is much more akin to a measurement in physics. This plays quite well with stuff I teach in my physics class about measurement.
A measurement in physics is also a sample: an attempt to determine a property of something by "tasting" it – taking a little bit in a way that lets you analyze the sample without changing the object being measured. Consider a thermometer as an example. When I'm poaching a salmon for a dinner party, I put a thermometer in my salmon poacher to test the temperature and find out how hot the water is. My students often assume "a measurement is a measurement and gives a true value", but it doesn't work this way. A measurement is simply a conjoining of two physical systems. What makes it a measurement is a set of theoretical assumptions about the process of their interaction. In the thermometer case, we assume:

• The zeroth law of thermodynamics: Energy will move between two objects in thermal contact in a direction to equalize their temperature (thermal energy density). So energy flows from a hot object into a cold one until they are the same temperature. This says we expect our thermometer to extract energy from the water until it is the same temperature as the water.

• The probe does not affect the state of the measured object significantly: The thermometer removes some energy from the water and so reduces its temperature. We assume that it only takes a little, and that reduction can be neglected. If I used my big poacher thermometer in an espresso cup to see if the coffee was too hot, the temperature the thermometer reads would not be the original temperature of the coffee but something partway between.

• The probe has a linear response: We calibrate our thermometers by placing them in melting ice and putting a mark at 0 °C, and then in boiling water and placing a mark at 100 °C. The bimetal coil (or the liquid in the thermometer) expands as it gets hot and shifts the marker on the dial. We assume that halfway between those points is 50 °C, and so on, but that isn't necessarily the case. It could expand more when it's colder and slow down when it gets hotter. Thermometers are carefully analyzed and can be trusted when used appropriately. (A similar analysis holds for voltmeters and ammeters.)

But the point is: when we make a measurement, it depends on theoretical assumptions about how our system is working. What does this have to do with polls? Well, a poll is a sample. A few voters are chosen to stand for the full population. The sample is too small to be chosen randomly: the error would be too large. So typically polls begin with a model of the electorate's demographics: who does the voting population consist of, and which of those are likely to actually vote in the election. These are often based on previous similar elections. But Michigan has not held a truly competitive Democratic primary in a long time. In 2012, Obama was unopposed. In 2008, Michigan tried to slip forward in time so as to be more important, and the DNC stripped half their delegates. Many of the candidates (including Obama) refused to campaign. The two contests before that were caucuses. So it may be that there is a tidal wave of surprise support for Bernie. But it could also be that the Michigan polls were based on crappy models. A failure of polling, yes, but not one representing a shift in support. The way we will tell is if somewhat similar states such as Illinois and Ohio, which have had more recent contested primaries and where primaries are held next week, also show significant underpolling for Bernie or not. I am willing to wait and see.
Third, I'm afraid I'm seeing a lot of "Cinderella underdog" metaphors – the idea that somehow the election is like a basketball tournament, where you just have to keep winning the popular vote. But because of the electoral college this is a terrible metaphor and leads us astray. As Democrats we want to win the presidency. To do so we need a path to 270 electoral votes, and since those states are almost all winner-take-all (except, I think, Nebraska and Maine), it takes a careful analysis of electoral strategy: how and where to devote resources to get out the vote – and which populations to concentrate on. This is where the great detail we are getting in the Democratic primaries can help us. And it is why "national polls" of one candidate against the other are, especially this early in the game, essentially useless. Not only do they show dramatic swings as the candidates face off against each other; they don't take into account the actual election mechanism. If neither candidate gets a majority of the delegates as a result of the primaries (there are all those "superdelegates" or SDs), here's what I hope would happen. The SDs would all throw away their current commitments and turn to the Quants – the quantitative analysts who would build models of the presidential election based on various models of the electorate and the details of the primary results in the various states. There would be a spread (spray) of results – similar to what you see for the projected paths of a hurricane – because of different assumptions plus random factors. The SDs would then use their personal knowledge of their own districts to evaluate those models and make their choices. That seems to me a good reason to have SDs. Maybe I'm dreaming to hope that things would work out this way, with the best choice for the fall election made on the basis of a detailed analysis of what we have learned from the primaries; I'm a bit afraid that the SDs would look to support their personal interests rather than the interests of the party. I'm sure that wouldn't be true of my SDs – representatives whom I voted for and like very much. It's just all those other folks you voted for! In any case, I will actively support whoever appears to have the best likelihood of winning the actual election, based on a careful analysis of our country's complex voting system, not based on my agreement with their program (Bernie 98% to Hillary 94%), nor on my assessment of who is likely to be a more effective president in practice (Hillary 4 : Bernie 1). I am very dismayed at the direction the Republican party has been trending over the past 35 years, and it seems to be getting worse and worse. (Full disclosure: I voted for Republicans in New York State Senate elections in the 1960s but have never voted for a Republican presidential candidate.) So to my Bernie-phile friends who say he can win, I say: OK, show me! I'm watching!

23 November 2015

My teaching philosophy

I got my teaching position decades ago, long before anyone started to ask candidates to write a "Teaching Philosophy." I recently had to create one for an application for internal university funding. Despite having written about teaching for decades (I wrote a small book about it), I found it an interesting challenge to try to condense it all into a page and a half. For your amusement, here it is.
My teaching philosophy is based on nearly 45 years of teaching students at the University of Maryland and more than 20 years of carrying out discipline-based education research with students attempting to learn physics. It is also informed by my readings of the literature in education, psychology, sociology, and linguistics. My teaching philosophy grows out of a few basic principles:

• It's not what the teacher does in a class that determines learning; it's what the students do. Learning is something that takes place in the student. And deep learning – sense making – involves more than just rote. It involves making meaning: making strong associations with other things that the students already know and organizing knowledge into coherent and usable structures.

• I can explain for you, but I can't understand for you. Students assemble their responses to instruction from what they already know – appropriately or inappropriately. This can lead to what appear to be preconceptions that are incorrect and robust. Note, however, that these may be created "on the fly" in response to new information that is being presented.

• Students' expectations matter. The expectations that students have developed about knowledge and how to learn (epistemology), based on previous experiences with schooling, are extremely important. Their answers to the questions, "What's the nature of the knowledge we are learning? [e.g., facts or productive tools?] What do I have to do to learn it? [e.g., memorize or sense-make?]" may matter as much as or more than the preconceptions they bring in about content.

• Science is a social activity. I'm teaching science, and science is all about how we know what we know. This is decided not by some algorithm but by a social process of sharing results, mutual evaluation, peer review, criticism, and discussion. Presenting a set of results to be repeated back is not science. Learning to do science means learning to participate in scientific conversations.

These lead me to rely heavily on a number of fundamental teaching guidelines:

1. Minds on – Look for activities that will engage the students' thinking and relevant experiences, making connections to things they know and are comfortable with.
2. Active engagement – Set up classes so that there is more for students to do and less listening.
3. Metacognition – Encourage students to be more explicit about their thinking, planning, and evaluating. As a teacher, be explicit about your thinking and why you are asking them to do what you are asking them to do.
4. Enable good mistakes – Mistakes that you can learn from are "good mistakes." Set up situations where your students will learn to think about their thinking and how to debug their errors – but do it supportively, with some but not too much penalty for errors.
5. Group work – Create situations where students are expected to discuss scientific ideas with their peers, both in and out of class.

And finally,

6. Listen! – To create the activities described above, you need to know how students are responding. Therefore, set up situations that will let you hear what students are thinking and doing.

These ideas lead to my using lots of explicit techniques in class, including: having students read text and submit questions before class, asking challenging (and sometimes intentionally ambiguous) clicker questions followed by discussions of "why" and "how do we know", and facilitating lots of group discussion, with "find someone who disagrees with you and see if you can convince them" as part of each class session.
And encouraging students to ask for regrades on quizzes and exams, and offering second-chance exams, among others. My experience with all this leads me to three concluding overarching ideas.

Diagnosis – When I first began teaching (for the first 30 years or so), if a student asked me a question, it was my instinct to answer it. In doing so I was drawing on my experience as "the good student" and had not transitioned to being "the teacher". I had to learn that being the good student was no longer my job. My job was not necessarily to answer the student's question, but rather to consider, "Why couldn't this student answer this question for him/herself despite my having taught the material in class?" My job is in part to diagnose the student's difficulty, not answer their question. That requires a dramatically different interaction with my students. And learning when to answer a question directly (sometimes the right thing to do) is subtle.

Respecting different perspectives – In the past five years, working closely with students from a different discipline than my own, I have learned that many views that seemed to me bizarre or just plain wrong were actually well justified in appropriate contexts. I have also learned from these same students that many of the approaches and results I took for granted, and was used to teaching in my own discipline, had hidden assumptions and required perspectives that were unnatural if not looked at with an expert's knowledge and the context of longer-term implications and applications.

Responsive teaching – Everything comes together in a fundamental overarching and unifying guideline: Listen to your students. Understand how they are interpreting and understanding (or misunderstanding) what you are teaching. Respect their views and what they bring to class, and respond by adjusting your instruction to match. This doesn't mean giving up your own view of what you want to teach or want them to learn. It means developing a good understanding of where they are and how you can help them get to where you want them to be.
DeBroglie equation applied to atoms & molecules: not so obvious

Jun 29, 2014 #1
One of the first things about QM we were taught in my undergraduate physics program is the de Broglie relation: λ = h/p. Now, it makes sense that this might hold for all elementary particles, especially since the evidence generally seems to suggest that the commonly observed forms of matter and energy are basically made of the same stuff. However, it doesn't logically follow that the same holds true for things like atoms and molecules (or even protons). This would suggest that all the constituents somehow "know" that they are part of a larger system and adjust (and sync up) their wavelengths accordingly. The way I would expect this to be approached is to treat an atom (or maybe just start with something simpler, like an electron-positron pair) as a multi-particle system, then calculate where all the individual particles would end up when shoved through a 2-slit experiment. Then the professor would say: "Notice that when you express this in terms of the center of mass of the system, you get the same equation as you would for a single particle whose mass is the sum of the parts." If you skip this step, you have an over-determined mathematical equation. So... can anyone point me to this derivation? I assume that I was not given it because it is too complicated to teach undergraduates. We already know it's true from experimental evidence, but it seems like SOMEONE would have double-checked the math...
Dustin Soodak

Jun 29, 2014 #2
Staff: Mentor
Ideas like that are common in beginner treatments, but in fact are not correct. Since Dirac came up with the transformation theory in 1927, ideas like wave-particle duality etc. were outmoded, and seen to actually be counterproductive to understanding QM – but such is not usually pointed out to start with. I think you need to see the real conceptual core. Once you understand that, then it will likely be easier to see how multi-particle systems are handled. Here is a correct analysis of the double-slit experiment from QM principles: it's got nothing to do with wave-particle duality etc. Nor does it change with composite systems. It's purely got to do with the laws of QM.
Last edited: Jun 29, 2014

Jun 30, 2014 #3
One way to think of this: "All particles can exhibit wave properties. A particle that expands and contracts as it moves will behave differently if it hits another particle when expanded compared to when it is contracted. In 1924, a physicist named Louis de Broglie proposed that the electron would exhibit a wave-like nature based on the electron's kinetic energy. His theory, together with the Davisson-Germer experiment done in 1927, established that an electron accelerated by an electric field does have wave properties. Based on de Broglie's calculations, an electron accelerated through a potential of 54 volts has a wavelength of 0.167 nanometers. Based on this wavelength, the electron takes 5.6 attoseconds to complete one expansion/contraction cycle. When these electrons are shot at a surface of nickel atoms where the spacing between atoms is a similar size, the electrons show a recoil pattern that allowed the spacing between the nickel atoms to be calculated."
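As a quick numerical check of the wavelength quoted in #3 (a back-of-envelope sketch, not part of the thread; the non-relativistic formula λ = h/√(2·m·eV) is assumed):

    import math

    h  = 6.62607015e-34    # Planck constant [J s]
    me = 9.1093837015e-31  # electron mass [kg]
    e  = 1.602176634e-19   # elementary charge [C]

    V = 54.0                       # accelerating voltage, as in Davisson-Germer
    p = math.sqrt(2 * me * e * V)  # non-relativistic momentum from eV = p^2/2m
    print(f"lambda = {(h / p) * 1e9:.3f} nm")  # -> lambda = 0.167 nm

The 0.167 nm figure checks out; the quoted "cycle time" depends on which frequency convention one attaches to the wave, which is part of why the picture in #3 is disputed below.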
Jun 30, 2014 #4
Staff: Mentor
See the FAQ: "So there is no duality – at least not within quantum mechanics. We still use the 'duality' description of light when we try to describe light to laymen because wave and particle are behaviors most people are familiar with. However, it doesn't mean that in physics, or in the working of physicists, such a duality has any significance." In QM it has been known for a long time that quantum objects are neither particles nor waves. Statements like "All particles can exhibit wave properties" are extremely misleading. Strictly speaking it is partly true – they can exhibit wave-like properties in certain situations – but there are questions such as: waves of what? And the space they propagate in is an abstract Hilbert space. Also, the delayed-choice experiments cast serious doubt on even this view: quantum objects are neither particle nor wave – even sometimes considering them as waves is problematic – they are quantum stuff that is not amenable to such simple pictures.

Jul 17, 2014 #5
First of all, thanks for all the responses. The links provide a nice variety of different ways to approach this subject. In order to clarify my question, I will use one of the links: in several equations the term "p" for momentum is included. My question is how you can take a multi-particle system such as a proton, a neutron, or in some cases a whole atom, and assume that this system's eventual detection can be predicted using the same equation as is used for a single elementary particle with the sum of the masses (minus a binding-energy correction) of the particles of which it consists. It seems like the way you would approach this is to reduce the total states of a 2-particle system from the full set of product states (using the notation in www.scottaaronson.com/democritus/lec9.html) to only those in which particle 1 and particle 2 always end up in the same place (final positions are |0> and |1>), since they are physically connected to each other. This is NOT, evidently, how the world actually works. However, I think that it deserves some explanation, even if that explanation is just a reference to an overcomplicated calculation of the wave function of something like an electron-positron pair.

Jul 19, 2014 #6
Science Advisor
That's a great question. If quantum mechanics is a theory with degrees of freedom {x}, and we study a theory with emergent degrees of freedom {y}, why should the emergent theory also be quantum mechanical? I can't answer it off the top of my head in technical detail for the systems you mention, but here are some things which answer similar questions.
1) Derivation of QM with a fixed number of particles from quantum field theory (which is something like QM with an infinite number of particles): http://www.damtp.cam.ac.uk/user/tong/qft/two.pdf (section 2.8.1)
2) Heuristic derivation of a low-energy theory of pions from QCD, in which the degrees of freedom are not pions: "The basic idea of an effective field theory is to treat the active, light particles as relevant degrees of freedom, while the heavy particles are frozen and reduced to static sources. The dynamics are described by an effective Lagrangian which is formulated in terms of the light particles and incorporates all important symmetries and symmetry-breaking patterns of the underlying fundamental theory."
3) Heuristic derivation of low-energy quantum gravity from an unknown quantum theory of everything: "Let us start by asking how any quantum mechanical calculation can be reliable. Quantum perturbation theory instructs us to sum over all intermediate states of all energies.
However, because physics is an experimental science, we do not know all the states that exist at high energy, and we do not know what the interactions are there either. So why doesn't this lack of knowledge get in the way of us making good predictions?"
4) The Born-Oppenheimer approximation:
5) Density functional theory:
Last edited: Jul 19, 2014

Jul 19, 2014 #7
Previously, I offered the following view based on ideas of Duane (W. Duane, Proc. Natl. Acad. Sci. 9, 158 (1923)) and Landé (A. Landé, British Journal for the Philosophy of Science 15, 307 (1965)): the change of direction of motion of a particle in the interference experiment is determined by the momentum transferred to the screen, and this momentum corresponds to quanta (e.g. phonons) with spatial frequencies from the spatial Fourier transform of the matter distribution of the screen. So I tend to draw the following conclusion: when the mass of the incident particle increases, the momentum transferred to the screen remains the same, but the angle of deflection of the incident particle becomes smaller, as its momentum is greater. So the mass of the incident particle is in some sense an "external" parameter for the interference experiment.

Jul 19, 2014 #8
Science Advisor
Just a note to illustrate that the de Broglie relation does make sense for large molecules: http://arxiv.org/abs/1009.1569. Interestingly, the abstract says: "While the observation of Poisson's spot offers the advantage of non-dispersiveness and a simple distinction between classical and quantum fringes in the absence of particle wall interactions, van der Waals forces may severely limit the distinguishability between genuine quantum wave diffraction and classically explicable spots already for moderately polarizable objects and diffraction elements as thin as 100 nm."
http://arxiv.org/abs/0903.1614 also has remarks about how the internal structure may affect these measurements: "Large and thermally excited molecules often resemble small lumps of condensed matter. One consequence of this is that each individual many-body system may often be regarded as carrying along its own internal heat bath. This can determine the likelihood for exchange events between the quantum system and its environment, and thus affect the molecular coherence properties." "Complex, floppy molecules may undergo many and very different conformational state changes even while they pass the interferometer. Several electro-magnetic properties, for instance the electric polarizability or the dipole moment, will change accordingly. This, in turn, can affect both the molecular interaction with the diffraction elements as well as their probability to couple to external perturbations."

Jul 20, 2014 #9
Science Advisor
I think these come pretty close to what you want. They start from the hydrogen atom, which is a system containing one proton and one electron, and derive that the centre of mass has a wave function which obeys the Schroedinger equation for a single free particle.

Jul 20, 2014 #10
This seems to be what I'm looking for (also: thanks for all the other interesting links everyone has posted on this thread). Now I just have to wade through enough of it to get an intuitive understanding of why it should come out so neatly...

Jul 20, 2014 #11
Science Advisor
I'm unsure whether it comes out so neatly for everything, or whether it is only exact for the hydrogen atom.
Of course, one only needs it to be approximate for more complex systems, but it would still be nice to know whether it is exact or approximate.

Jul 21, 2014 #12
That's exactly what I find to be so weird... The masses of the neutron and proton are put into Schrodinger's equation as if they were fundamental particles, and it works perfectly even though each is actually three different particles all interacting with each other via two different forces (one of which isn't even inverse-square). If THESE systems somehow magically work out, then there must be some general mathematical rule (maybe something to do with each particle's potential energy being lowest in the vicinity of the other particles and the whole system being in its ground state most of the time). If there weren't, then this would imply that something awfully suspicious was going on.

Jul 21, 2014 #13
Staff: Mentor
That makes zero difference to the validity of the equation, i.e. whether they are fundamental objects or not. As we have been discussing in another thread, the validity of the Schroedinger equation is in fact a requirement of symmetry principles – see Ballentine, Chapter 3. In treating the hydrogen atom, the key insight is that its classical Hamiltonian translates to the quantum one. This is in fact a fundamental issue in QM – in general, classical Hamiltonians do not uniquely determine quantum ones. Even our deepest formalism, the geometric approach, doesn't fully resolve that one. Like I said, symmetry considerations imply the quantum Hamiltonian has exactly the same form as the classical one. The general procedure is to interchange one for the other. Why is it valid? Well, there are general theorems showing that for expectation values they would have to be the same – so the answer is: simplicity. We do not postulate nature to be more complex than necessary unless the simple solution fails. It doesn't – so I guess our faith in the simplicity of nature worked here. BTW, if you wanted to analyse the hydrogen atom in terms of quarks etc., that would mathematically involve the standard model and QFT – not QM. The mathematical rule is dead simple: the quantum Hamiltonian corresponds to the classical one. In the case of the hydrogen atom the classical Hamiltonian is very easy – we have a light electron attracted via the Coulomb force to a much heavier nucleus. Exactly what holds the quarks in the nucleus together is no more relevant to this analysis than what holds electrons to the surface of objects is relevant to electrostatic experiments. Obviously something does it, and a deeper analysis will show what it is – but we do this in physics all the time – we abstract away inessentials. That's what's going on here – we abstracted away the inessential of what holds the quarks in protons together.
Last edited: Jul 21, 2014

Jul 22, 2014 #14
Science Advisor
As above, I can't answer in detail, but these are good questions, and hopefully some of the references in post #6 will help answer them. Here are a couple more references. How do we get the proton mass from QFT? How do we get the Schroedinger equation from QFT? And in the spirit of your question, even after we get Schroedinger's equation, there are more mysteries. For example, electrons in a solid should interact by the Coulomb force. Yet band theory, which is so successful, seems to be the theory of single electrons!
http://gdr-mico.cnrs.fr/UserFiles/file/Ecole/biermann_mico.pdf
I suspect it doesn't work out so nicely beyond the hydrogen atom, and is just some sort of approximation. If the wave function for the centre of mass were everything, then molecules with different internal structure but the same total mass would show the same de Broglie behaviour. But they don't: http://arxiv.org/abs/1405.5021. Actually, there is a flaw in my argument, so perhaps it does work out magically even for complex molecules.
Last edited: Jul 22, 2014

Jul 22, 2014 #15
Staff: Mentor
Take the simple explanation of the orbits of planets using Newton's laws. They are not really point particles, but aggregates of particles held together by gravitational forces. But what stops all those points clumping together into a single point? It's actually Pauli's exclusion principle – but how do we know that, if we include it, the explanation doesn't break down? Without detailed calculations we don't know. It's simply what's reasonable, and if you model it this way you get pretty good correspondence with observation. It's really a fundamental issue with mathematical modelling in general – exactly what you can abstract away and what is crucial.

Jul 22, 2014 #16
Science Advisor
I would say that a sufficient condition for this is that all solutions of the N-body Schrödinger equation can be built from product solutions of the form ψ(R,r) = χ(R)φ(r). Here, R is the center-of-mass coordinate and r is a shorthand notation for the N−1 other coordinates. This method is called "separation of variables" and is possible if the Hamiltonian can be written as H = H_R + H_r, which I think is the case at least as long as there's no external potential. /edit: So I think the basic question here is not specific to quantum mechanics.
Last edited: Jul 22, 2014

Jul 22, 2014 #17
Staff: Mentor
Neither do I. I think it's a general issue with mathematically modelling anything. What you do is get rid of the inessential and keep what's critical. I don't think there is any general answer other than what seems reasonable and what works. The real issue here is that students often don't think deeply about this stuff and sort of pick it up by osmosis, so they don't realize exactly what's going on. It can require a bit of thought when taken to task about it.
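For reference, here is a minimal sketch of the separation argument alluded to in posts #9 and #16, written out for two particles (the hydrogen-atom case); the notation is mine, not from the thread:

\[
H=\frac{p_1^2}{2m_1}+\frac{p_2^2}{2m_2}+V(x_1-x_2),\qquad
R=\frac{m_1x_1+m_2x_2}{M},\quad r=x_1-x_2,
\]
\[
M=m_1+m_2,\quad \mu=\frac{m_1m_2}{M}
\quad\Longrightarrow\quad
H=\underbrace{\frac{P^2}{2M}}_{H_R}+\underbrace{\frac{p^2}{2\mu}+V(r)}_{H_r},
\]
with $P=p_1+p_2$ the total momentum. Since $H=H_R+H_r$ with $[H_R,H_r]=0$, product states $\psi(R,r)=\chi(R)\varphi(r)$ solve the Schrödinger equation, the centre-of-mass factor obeying the free equation
\[
i\hbar\,\partial_t\chi=-\frac{\hbar^2}{2M}\Delta_R\chi,
\]
whose plane-wave solutions $\chi\propto e^{i(K\cdot R-\omega t)}$ carry the de Broglie wavelength $\lambda=2\pi/\vert K\vert=h/\vert P\vert$ of a single free particle of mass $M$. For more than two particles the same split works whenever the potential depends only on relative coordinates, which is the point of the "/edit" remark above: the separation itself is just mechanics of the centre of mass, nothing specifically quantum.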
Thursday, 22 May 2014

Mr Clay and a Meaningless Navier-Stokes Prize Problem

Turbulent flow around a landing gear as a non-smooth solution of the 3d incompressible Navier-Stokes equations, by CTLab KTH. Watch also turbulent flow around an airplane in landing configuration. To argue that these flows are smooth would be a meaningless abuse of mathematical language.

The Clay Institute of Mathematics (CMI), founded by Landon T. Clay, celebrated the new Millennium by setting up 7 Prize Problems, each worth $1 million, presented in beautiful words:

• The Clay Mathematics Institute (CMI) grew out of the longstanding belief of its founder, Mr. Landon T. Clay, in the value of mathematical knowledge and its centrality to human progress, culture, and intellectual life....
• further the beauty, power and universality of mathematical thinking...deepest, most difficult problems... achievement in mathematics of historical dimension
• elevate in the consciousness of the general public the fact that, in mathematics, the frontier is still open and abounds in important unsolved problems...
• Problems have long been regarded as the life of mathematics. A good problem is one that defies existing methods...whose solution promises a real advance in our knowledge.

I have long argued that since the Navier-Stokes Prize Problem is formulated without including the fundamental aspects of wellposedness and turbulence, it misses these values and thus is not a good Prize Problem. Here is my argument again:

Consider the incompressible Navier-Stokes equations with viscosity $\nu >0$ in the case of (very) large Reynolds number $Re =\frac{UL}{\nu}$, with $U$ a global flow speed and $L$ a global length scale. Assume $U=L=1$ and thus $\nu$ (very) small. Such flows are observed physically and computationally to be turbulent, with substantial velocity fluctuations $u\sim \nu^\frac{1}{4}$ on a smallest spatial scale $\epsilon\sim\nu^\frac{3}{4}$, with corresponding substantial viscous dissipation $\sim 1$. For the jumbo jet in the above simulation $Re\approx 10^8$ and the smallest scale is a fraction of a millimeter.

The heuristic argument to this effect goes as follows:

A: Breakdown to smaller scales only takes place for sufficiently large local Reynolds number (of size 100 or more, here normalized to unit size), which gives the following relation for the fluctuations $u$ on the smallest scale $\epsilon$: $\frac{u\epsilon}{\nu}\sim 1$.

B: Substantial dissipation on the smallest scale $\epsilon$ means $\nu\frac{u^2}{\epsilon^2}\sim 1$.

Combination of A and B gives $u\sim \nu^\frac{1}{4}$ and $\epsilon\sim\nu^\frac{3}{4}$ as stated: from A, $u\sim\nu/\epsilon$, which inserted into B gives $\nu^3\sim\epsilon^4$. This can be viewed to express Lipschitz-Hölder continuity with exponent $\frac{1}{3}$ (since $u\sim\epsilon^{\frac{1}{3}}$), and thus that turbulent solutions for (very) small $\nu$ are non-smooth, because they are $Lip^{\frac{1}{3}}$ on (very) small scales.

The existence of such turbulent solutions can mathematically be proved by standard methods by regularization on scales much smaller than $\epsilon$, which does not change the solution but the NS equation. For smooth data such solutions to regularized NS could formally be proved to be smooth in the sense of the formulation of the NS Prize Problem by Fefferman, but this would be in conflict with the observation that solutions are non-smooth ($Lip^{\frac{1}{3}}$) on (very) small scales $\sim\nu^\frac{3}{4}$.
The only mathematically and physically reasonable way to resolve this conflict of definitions would be to view turbulent solutions as non-smooth ($Lip^{\frac{1}{3}}$ on very small scales), and thus as weak solutions, with weakly small but strongly large Euler residuals; the aspect of wellposedness would then be of focal interest. Computational sensitivity (stability) analysis shows that turbulent weak solutions are weakly wellposed in the sense that solution mean-values are not highly sensitive to perturbations of data (while point-values are). Stability analysis further shows that globally smooth solutions, with derivatives of unit size for smooth data of unit size, are unstable and thus are not physical solutions.

The net result is that the present formulation of the NS Prize Problem is meaningless from both a mathematical and a physical point of view. A meaningful formulation must include wellposedness and turbulence as key issues, with existence settled by standard techniques, and a meaningful resolution would have to offer mathematical evidence of weak wellposedness and features of turbulence.

I have asked Terence Tao, as a world-leading mathematician working on the Prize Problem, about his views on the aspects I have brought up, and will report his response. I have earlier many times asked Fefferman the same thing, but the only response I get is "To me my formulation is meaningful". What would Mr Clay then say, if he understood that the NS Prize Problem is not meaningful outside a small group of mathematicians (which may contain just one person), when compared to the mission to which he donated his Prize?

PS It is remarkable (or deplorable) that my repeated request to start a discussion about the formulation of the Prize problem is met with complete silence from those in charge of the problem. If my viewpoints are silly, that could be said by those who know better. If they are not silly, maybe even relevant, then it would be silly (or deplorable) to not say anything. In either case, silence is not reasonable, and it is tiresome to keep silent under increasing pressure from the outside world to say something...

Wednesday, 21 May 2014

Tao on Clay Navier-Stokes and Turbulence?

Tuesday, 20 May 2014

Answer to My Question about Formulation of Clay Navier-Stokes Prize Problem

Here is the response from the Clay Mathematics Institute to my message that the formulation of the Navier-Stokes Prize Problem does not include the fundamental aspect of wellposedness required for a mathematical model of a physical phenomenon to be meaningful:

Dear Dr Johnson,
Thank you for your interest in the Millennium Prize Problems. Complete details can be found at
As a matter of policy, the Clay Mathematics Institute does not join in discussion of the formulation of the Millennium Prize Problems, nor does it comment on potential solutions. I am afraid that we have nothing to add to what is said on the CMI's website.
Best wishes,
Anne Pearsall (Mrs)
Administrative Assistant
Office of the President, Clay Mathematics Institute
Andrew Wiles Building
Radcliffe Observatory Quarter
Woodstock Road
Oxford OX2 6GG, UK

OK, so we learn that the Administrative Assistant of the President of the Clay Mathematics Institute, not the President himself, "is afraid that we have nothing to add", and that the Institute "does not join in discussion of the formulation of the Millennium Prize Problems".
Yes, this is indeed something to be afraid of, in particular if Mr Clay himself understands that the formulation of the NS problem is unfortunate in the sense of lacking meaning to physics, and that a meaningless problem cannot have a meaningful solution. The fact that my question about the meaningfulness of the NS Problem in its present formulation is met by compact silence may be interpreted as a silent acknowledgement that the formulation indeed is meaningless, and that it is purposely so in order to reserve the problem to meaningless mathematics and guarantee that, in Newton's words, "little smatterers" are kept out.

Wellposedness vs the Clay Navier-Stokes Problem?

In a sequence of posts I have argued that the omission of wellposedness in the Official Description of the Clay Navier-Stokes Prize Problem by Charles Fefferman makes the problem meaningless. To support this I quote from Wellposedness and Physical Possibility by B. Gyenis:

Wellposedness is widely held to be an essential feature of physical theories. Consider the following remarks of Mikhail M. Lavrentiev, Alan Rendall, and Robert M. Wald – leading experts in their respective fields of physics – intended as motivations for the continuous dependence condition:

• One should remember that the main goal of solving mathematical problems is to describe certain physical processes in mathematical terms. In this case the initial data are obtained experimentally; and since measurements cannot be absolutely precise, the data contain measurement errors. For a mathematical model to describe a real physical process, the problem should be supplemented with some additional requirements reflecting, in a physical sense, the fact that the solution should have only small variations under slight changes of initial data or, to put it conventionally, the stability of the solution under small perturbations in the data. (Lavrentiev et al.; 2003, p. 6)

• The condition of continuity is sometimes called Cauchy stability. The reason for including it is as follows. If PDE are to be applied to model phenomena in the natural world it must be remembered that measurements are never exact but always associated with some error. As a consequence it is impossible to know initial data for a problem exactly and so if solutions depend on the initial data in an uncontrollable way the model cannot make useful predictions. Cauchy stability guarantees that this does not happen and thus represents a necessary condition for the application of PDE to the real world. (Rendall; 2008, p. 134)

• If a theory can be formulated so that "appropriate initial data" may be specified (possibly subject to constraints) such that the subsequent dynamical evolution of the system is uniquely determined, we say that the theory possesses an initial value formulation. However, even if such a formulation exists, there remain further properties that a physically viable theory should satisfy. First, in an appropriate sense, "small changes" in initial data should produce only correspondingly "small changes" in the solution over any fixed compact region of spacetime. If this property were not satisfied, the theory would lose essentially all predictive power, since initial conditions can be measured only to a finite accuracy. It is generally assumed that the pathological behavior which would result from the failure of this property does not occur in physics. [...] (Wald; 1984, p. 224)
These remarks express a sentiment widely shared among physicists: wellposedness is a necessary condition for models to describe real physical processes. Lack of wellposedness would be pathological and "does not occur in physics," at least not in describing forward-time propagation of physical processes.

OK, so leading experts of physics consider wellposedness to be a necessary requirement for a mathematical model of physical phenomena to be meaningful. The Navier-Stokes equations are the basic model of fluid mechanics, and as such require some form of wellposedness to be meaningful. The leading mathematical expert Charles Fefferman formulates the Clay Navier-Stokes problem without reference to wellposedness, and thus apparently considers wellposedness not to be a central aspect. But in doing so Fefferman separates the mathematics of the Navier-Stokes equations from physics, which goes against the reason for formulating a Prize Problem about a mathematical model of fundamental importance in physics. When I ask the Clay Institute and Fefferman to comment on these facts, I get zero response. I think my viewpoints are reasonable and essential and thus worthy of some form of answer.

Monday, 19 May 2014

Wellposedness and Turbulence Not Part of Clay Navier-Stokes Problem!

A central aspect of the mathematical theory of partial differential equations, such as the incompressible Navier-Stokes equations, concerns wellposedness, which is the sensitivity of solutions with respect to perturbations of data in suitable quantitative form. Without wellposedness in some form, solutions have no permanence and meaning, since they can change arbitrarily subject to virtually nothing. But the Official Description of the Clay Navier-Stokes Prize Problem does not include the aspect of wellposedness.

A central aspect of incompressible flow described by the Navier-Stokes equations is turbulence. But the Official Description of the Clay Navier-Stokes Prize Problem does not include any aspect of turbulence.

The Official Description is thus questionable, to say the least, from both a mathematical and a physical point of view, by leaving out what is fundamental. When I point this out to Charles Fefferman, who has formulated the Official Description of the problem, to Luis Caffarelli, who gives a video presentation thereof, to Peter Constantin, who acts as referee to evaluate proposed solutions, to Terence Tao, who works to solve the problem, and to the President of the Clay Institute, I get no reaction but silence. This is not reasonable, since the Navier-Stokes equations and the mathematics thereof belong to us all and thus must be open to public discussion, in particular when they have been elevated to a Millennium Prize Problem of importance to humanity. I sent the following renewed request to the people involved to reveal their cards:

Dear Colleagues: I am trying to get a response from you concerning my questioning of the Official Description of the Clay Navier-Stokes Prize Problem expressed here. I get no response but compact silence. I don't think this is in the interest of a Clay Prize Problem, which is of concern to a wide mathematical and scientific community and not secluded to a very small closed circle. The omission of both wellposedness and turbulence in the Official Description lacks rationality from both a mathematical and a physical point of view, and irrationality is against the principles of mathematics and physics. I hope you can see that my questioning requires a response from you in your respective roles.
Sincerely, Claes Johnson

PS I raised the same question a couple of years ago, and the only response then to my question of how the Prize problem could be meaningful without including the aspect of wellposedness was Fefferman's short reply: "It is meaningful to me". I think this answer misses the fact that science is not only a private thing.

Sunday, 18 May 2014

Crisis in Mathematics Education in France like in Sweden

• Many people speak thereof but few do anything about it.
• The phenomenon has several causes:

Saturday, 17 May 2014

BodySoul Mathematical Simulation Technology Translated to Chinese

Today I received the following letter from Zhimin Zhang (with copy to Qun Lin as a leading Chinese applied mathematician):

Dear Professor Johnson, First, I would like to apologize for taking almost 4 years to get back to you about your book. The reason was that Professor Lin wanted to understand and "digest" your book more before talking with you. To make a long story short, he likes your book very much and has organized a group of Ph.D. students to translate your book into Chinese. It is a book with more than 1600 pages and that is why it takes almost 4 years to complete. Now Professor Lin wants me to ask for your permission to publish the Chinese translation of your book. In addition, if you have updated your book, we would like to have the new version and update our translation. We look forward to your favorable response, Zhimin.

I replied that I was glad to hear this and suggested setting up a formal agreement about the use of BodySoul Mathematical Simulation Technology in China. I will report what comes out of this. I recall that the book is censored at KTH, so apparently Sweden has stricter censorship than China. The number of fresh engineering students each year is 1,000 at KTH, while it is 10,000,000 in China.

Almost Dictatorial Consensus in Germany

An internal memo, On the situation in the field of meteorology-climatology, of the German Meteorological Society reveals a growing and widespread worry over the suppression of scientific views under an almost dictatorial consensus:

• In meteorology-climatology "everyone" includes a highly visible army of organized, little-known persons; in Germany this is almost the entire public!
• For example, expressed and disseminated meteorological flaws can hardly be contained and cannot be corrected publicly at all. Yet our meteorological scientists do not speak up.
• And it is hardly perceived that behind these developments – admittedly – there is also a political objective for the transformation of society, whether one wants it or not. Currently global sustainable change is the same thing.
• Meteorology-climatology is playing a decisive role in this political action. The – alleged – CO2 consensus here is serving as a lever within the group that consists of known colleagues who deal with climate, but also of a large number of climate bureaucrats coming from every imaginable social field. Together, both groups have consensually introduced a binding dogma into this science (which is something that is totally alien to the notion of science).
• This is not the first time such a thing has happened in the history of science. Although this dogma came about through democratic paths (through consensus vote?), in the end it is almost dictatorial.
• Doubting the dogma is de facto forbidden and is punished. In climatology the doubt is about datasets or results taken over from hardly verifiable model simulations from other parties.
Until recently this kind of science was considered conquered – thanks to our much celebrated liberal/democratic foundation!
• The constant claim of consensus among so-called climatologists, who relentlessly claim man-made climate change has been established, attempts to impose by authority an end to the debate on fundamental questions.
• Thus a large number of scientist colleagues end up being ostracized, and this could lead to the prompting of actions that would place considerable burdens on a well-intentioned society. Such a regulation, and the incalculable consequences it would have for all people, would in our view – and that of many meteorological specialists we know – be irresponsible with respect to our real level of knowledge in this field.
• We must desire in general, and also in our scientific field, a return to an international scientific practice that is free of preconceptions and cemented biased opinions.
• This must include the freedom of presenting (naturally well-founded) scientific results, even when these do not correspond to the mainstream (e.g. the IPCC requirements).

The bullying of Lennart Bengtsson is a recent example of violation of scientific/democratic principles in the name of "almost dictatorial consensus". Another is KTH-gate. Where is Western society heading?

Friday, 16 May 2014

Towards Computational Solution of Clay Navier-Stokes Problem 3

The formulation of the Clay Navier-Stokes Prize problem is unfortunate, or more precisely both mathematically and physically meaningless, because the following two completely fundamental aspects are not included:
1. wellposedness
2. turbulence.

To see the effect, consider exterior flow with a slip boundary condition, which allows a unique stationary smooth near-solution in the form of potential flow with a Navier-Stokes residual which scales with the viscosity $\epsilon$. Smooth potential flow thus offers a solution to the NS equations with a vanishingly small residual under vanishingly small viscosity. But potential flow is not stable, since under small perturbations it develops into a completely different turbulent solution. In other words, potential flow is not wellposed in any sense and thus is not a physical solution. The present problem formulation, without 1 and 2, does not allow unphysical smooth potential flow to be distinguished from physical turbulent flow. The result is that the Clay NS problem has no meaningful solution and does not serve the purpose of a Prize problem.

Note that the Clay NS problem is introduced with the following description of the essence of the problem and its importance to humanity:

But turbulence is not an issue in the official formulation. The secret to unlock is turbulence, but that is not part of the problem formulation. Something is weird here. I have pointed that out to the President of the Clay Mathematics Institute and will report the reaction. Here is the letter:

Clay Mathematics Institute

I want to convey the information that the formulation of the Clay Navier-Stokes problem is incorrect both mathematically and physically, because the fundamental aspects of (i) wellposedness and (ii) turbulence are not included, as exposed in detail in the following sequence of blog posts:

The result is that the problem cannot be given a meaningful solution and thus does not serve well as a Prize problem. Evidence is given by the fact that no progress towards a solution has been made.
I have tried to engage Charles Fefferman, who has formulated the problem, Peter Constantin, who acts as a referee, and Terence Tao, who is working on the problem, in a discussion, but I get no response. I hope in this way to stimulate a discussion, which I think would be more constructive than no discussion at all. Sincerely, Claes Johnson

Towards Computational Solution of Clay Navier-Stokes Problem 2

This is a continuation of a previous post. The basic energy estimate, which is easily proved analytically by multiplying the momentum equation by the velocity $u_\epsilon$ and integrating, reads for $T>0$ with $Q =\Omega\times (0,T)$:
• $\int_\Omega\vert u_\epsilon (x,T)\vert^2\, dx + 2\int_{Q}\epsilon\vert\nabla u_\epsilon (x,t)\vert^2\, dxdt =\int_\Omega\vert u^0(x)\vert^2\, dx$,
or in short notation with obvious meaning:
• $U(T) + D_\epsilon (U) = U(0)$,
which expresses a balance of kinetic energy $U(T)$ at time $T$ and dissipation $D_\epsilon (U)$ over the time interval $(0,T)$, summing up to the initial kinetic energy $U(0)$.

Computations with small $\epsilon$ (compared to data such as $\Omega$ and $U(0)$) produce turbulent solutions characterized by
• $D_\epsilon (U) =\alpha U(0)$,
where $\alpha$ is not small, that is, solutions with substantial (turbulent) dissipation. For turbulent solutions $\vert \nabla u\vert$ is large, typically scaling with $\epsilon^{-\frac{1}{2}}$, even if initial data are smooth, which can be viewed as an expression of non-smoothness. The basic energy estimate can thus be used to signify non-smoothness by substantial turbulent dissipation. The Clay problem can thus be reduced to the question of proving that the dissipation term is substantial in the basic energy estimate.

Evidence to this effect is given by computation. Analytical evidence can be given by the following argument: smooth laminar solutions have small dissipation, but smooth laminar solutions are all unstable. If the dissipation remained small, it would mean that an unstable solution would remain smooth and unstable, which is not possible under perturbation. The dissipation therefore must be substantial in the basic energy estimate, and only a non-smooth solution can exist (and does exist by computation).

An answer to the Clay problem may thus be possible along the following lines, assuming the viscosity is small and data are smooth:
1. Solutions exist for all time and do not cease to exist by blow-up.
2. Solutions become non-smooth (turbulent) in finite time.
3. Solutions cannot stay smooth for all time, because any smooth solution is unstable.
4. Solutions are weakly well-posed in the sense that solution mean-values are stable to perturbations, because of a cancellation effect in turbulent solutions which is not present for smooth solutions.

The group of mathematicians in charge of the problem (Fefferman, Constantin and Tao) do not answer my repeated requests to open a discussion about the formulation of the problem and possible approaches to a solution. This is not helpful to progress. Mathematicians apparently want to have a heaven of their own, where they can explain phenomena which have no scientific relevance, but this is a dangerous strategy in the long run, because without connection to science, funding may cease.
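For the record, here is the short computation behind the energy estimate quoted above (a sketch; boundary terms are assumed to vanish, as they do for no-slip boundary conditions):

\[
0=\int_\Omega u_\epsilon\cdot\Big(\frac{\partial u_\epsilon}{\partial t}+u_\epsilon\cdot\nabla u_\epsilon+\nabla p_\epsilon-\epsilon\Delta u_\epsilon\Big)\,dx
=\frac{1}{2}\frac{d}{dt}\int_\Omega\vert u_\epsilon\vert^2\,dx+\epsilon\int_\Omega\vert\nabla u_\epsilon\vert^2\,dx,
\]
since incompressibility kills the convection and pressure terms:
\[
\int_\Omega u\cdot (u\cdot\nabla u)\,dx=\frac{1}{2}\int_\Omega u\cdot\nabla\vert u\vert^2\,dx=-\frac{1}{2}\int_\Omega (\nabla\cdot u)\,\vert u\vert^2\,dx=0,\qquad
\int_\Omega u\cdot\nabla p\,dx=-\int_\Omega p\,(\nabla\cdot u)\,dx=0.
\]
Multiplying by 2 and integrating in time from $0$ to $T$ gives $U(T)+D_\epsilon(U)=U(0)$ with $D_\epsilon(U)=2\epsilon\int_Q\vert\nabla u_\epsilon\vert^2\,dxdt$.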
Thursday, 15 May 2014

Lennart Bengtsson vs Royal Swedish Academy on Swedish Climate Science and Politics

Lennart Bengtsson indicates that the statement from 2009 by the Royal Swedish Academy of Sciences on the Scientific Basis of Climate Change, authored mainly by himself as leading Swedish climate scientist and expressing (cautious) support of the CO2 alarmism propagated by the IPCC, is due for a revision. Since 2009 LB has turned from supporter to skeptic of IPCC CO2 alarmism, which he has made very clear in media outside Sweden. The question now is whether LB will participate in forming the revision or not.

If the standpoint of LB as skeptic dominates the revision, which is reasonable since he is the leading climate scientist in the Academy, then the new statement will express skepticism towards CO2 alarmism and there will be no scientific foundation for current Swedish climate politics. If the standpoint of LB turns out to be incompatible with that of the Academy, then the revision will be formed without the participation of the leading climate scientist in Sweden, will then carry no weight, and the result will be the same. It seems that interesting times are awaiting the Academy and Swedish climate science and politics. For an account of the related GWPF story see Climate Depot. LB's recent article pointing to small climate sensitivity has been rejected for publication on political grounds, since it questions the dogma of climate alarmism. See the article in The Times and Roy Spencer.

Towards Computational Solution of Clay Navier-Stokes Problem 1

The Clay Navier-Stokes problem as formulated by Fefferman asks for a mathematical proof of (i) existence, for smooth initial data, of smooth solutions for all time to the incompressible Navier-Stokes equations, or (ii) blow-up of a solution in finite time. No progress towards an answer has been made since the problem was announced in 2000. It appears that the available tools of mathematical analysis by pen and paper are too crude to give an answer. Let me here sketch (see also earlier posts) an approach based on digital computation which may give an answer.

We then consider the incompressible Navier-Stokes equations in velocity $u=u_\epsilon (t,x)$ and pressure $p=p_\epsilon (t,x)$:
• $\frac{\partial u}{\partial t}+u\cdot\nabla u +\nabla p =\epsilon\Delta u$
• $\nabla\cdot u =0$
for time $t > 0$ and $x\in\Omega$, with $\Omega$ a three-dimensional domain, subject to smooth initial data $u_\epsilon (0,x)=u^0(x)$ and slip or no-slip boundary conditions. Here $\epsilon > 0$ is a constant viscosity, which we assume to be small compared to data ($\Omega$ and $u^0$).

Computed solutions show the following dependence on $\epsilon$ under constant data:
1. $\Vert\epsilon^{\frac{1}{2}}\nabla u_\epsilon\Vert_{L_2(L_2)} \sim 1$
2. $\Vert\epsilon\Delta u_\epsilon\Vert_{L_2(H^{-1})}\sim \epsilon^{\frac{1}{2}}$
3. $\Vert\epsilon\Delta u_\epsilon\Vert_{L_2(L_2)}\sim \epsilon^{-\frac{1}{2}}$.

Here 1 reaches the upper bound of the standard energy estimate, which can be proved analytically, and shows that $\nabla u_\epsilon$ becomes large with decreasing $\epsilon$, as a quantitative expression of non-smoothness, with 2 a variant thereof. Also 3 expresses non-smoothness in quantitative form, with $\epsilon\Delta u_\epsilon$ being small in a weak norm but large in a strong norm.
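The scalings in 1 and 3 can be illustrated in a toy setting that anyone can run. The sketch below is my own and is emphatically not the program referred to in these posts, and 1d Burgers is of course not Navier-Stokes; it is only a minimal stand-in exhibiting the same phenomenon: as $\epsilon$ decreases, the gradient norm grows like $\epsilon^{-\frac{1}{2}}$ while the dissipation $\epsilon\Vert u_x\Vert_{L_2}^2$ stays of unit size.

    import numpy as np

    def grad_norm_at_T(eps, N=4096, T=1.5):
        """Viscous Burgers u_t + u u_x = eps*u_xx on [0, 2*pi), u0 = sin x,
        by explicit centered finite differences (N chosen so that dx < eps
        and the shock profile of width ~eps is resolved)."""
        x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
        dx = x[1] - x[0]
        u, t = np.sin(x), 0.0
        while t < T - 1e-12:
            # time step limited by both convection and diffusion stability
            dt = min(0.4 * dx / max(abs(u).max(), 1e-12),
                     0.4 * dx**2 / (2.0 * eps), T - t)
            ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
            uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
            u += dt * (-u * ux + eps * uxx)
            t += dt
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        return np.sqrt(np.sum(ux**2) * dx)

    for eps in (1e-2, 5e-3, 2.5e-3):
        g = grad_norm_at_T(eps)
        print(f"eps={eps:7.4f}  |u_x|_2={g:7.2f}  eps*|u_x|_2^2={eps * g * g:5.2f}")
    # |u_x|_2 grows roughly like eps**-0.5 (a factor sqrt(2) per halving of eps),
    # while eps*|u_x|_2^2 stays roughly constant: substantial dissipation that
    # does not vanish with the viscosity.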
Computation thus is observed to produce solutions to the Navier-Stokes equations with an increasing degree of non-smoothness as $\epsilon$ tends to zero, which can be seen as an answer to the Clay question in the direction of (ii), but not quite, since the solution does not cease to exist by "blow-up" and continues as a non-smooth weak solution. Computed solutions satisfying 3 are turbulent. Mean-value outputs of turbulent solutions show small variation as the viscosity becomes small, in particular with slip. This can be seen to express weak well-posedness under variation of small viscosity, which may allow carrying the conclusion from computationally resolvable small viscosity to vanishingly small viscosity beyond computation.

We may compare with the attempt by Terence Tao to construct a non-smooth solution by pen and paper in a thought experiment, where the computation is left to the reader of a 70-page dense "computer program" expressed in analytical mathematics. We let instead the computer compute the solution following a standard (freely accessible) computer program, which allows the reader to do the same and then inspect the solution, verify 3, and thus get an answer to the Clay problem.

Wednesday, 14 May 2014

Shocking Message from Lennart Bengtsson Muted by Climate Alarmists

Die Klimazwiebel publishes the following shocking letter from Lennart Bengtsson, forced to resign from the advisory board of GWPF under group pressure from politically correct CO2 alarmists:

I have recently communicated with LB and expressed my great admiration for his courageous questioning of CO2 alarmism in media, because in his view the scientific reason is lacking. He then said that he would continue to fight for scientific truth, following his responsibility as leading scientist. But LB has now been muted by naked power, and the order of climate alarmism is re-established. What a terribly sad story this is! For Sweden, Science and the World!

See also Climate Depot and Tallbloke and Bishophill and JoNova and Climate Audit, and the reaction from David Henderson, Chairman, GWPF's Academic Advisory Council:
• With great regret, and all good wishes for the future.

No wonder that the reaction is so strong: big values are at stake. The whole alarmist ship is sinking and desperation is spreading… One day not too far away, LB will be glorified as a scientist ready to follow his conviction, now only temporarily overpowered…, no matter what the cost may be…

PS1 As noted by Lubos, the event may drive LB into a true skeptic position, rather than back to alarmism, a position now taken by many scientists and thus, if not maximally comfortable, probably livable.

PS2 More on the Swedish Klimatupplysningen and Antropocene and in MSM outside Sweden:

Towards a Solution of the Clay Navier-Stokes Problem 2

The Clay Millennium Navier-Stokes problem concerns properties of solutions of the incompressible Navier-Stokes equations as the basic model of fluid mechanics, of fundamental importance in both science and mathematics. The Official Description by Charles Fefferman poses the following alternatives:
1. Existence of smooth solutions for all time from smooth initial data?
2. Cessation of existence ("break-down" or "blow-up") of a solution from smooth initial data?

No progress towards a solution has been made since the formulation in 2000. Existence of smooth solutions for all time seems impossible, since the viscosity term is not strong enough.
All efforts to construct a solution with blow-up have failed because the viscosity term is too strong. No answer thus seems to be possible, and a scientific deadlock is reached. Over the years I have, without success, tried to convey the message that the reason for the deadlock is that Fefferman's problem formulation is both mathematically and physically meaningless, because the fundamental aspect of (Hadamard) wellposedness, or stability of solutions to perturbations, is not included.

Including well-posedness leads to the following possible answer, which is neither 1 nor 2 and which deals with the case of small viscosity (compared to initial data):
• Turbulent solutions always develop in finite time from smooth initial data.
• A turbulent (non-smooth) solution is characterized by having a Navier-Stokes residual which is small in a weak $H^{-1}$-norm and large in a strong $H^1$-norm.
• Turbulent solutions are weakly wellposed by having stable mean-value outputs.

I have tried to get some comment from Terence Tao, Charles Fefferman and Peter Constantin, who are in charge of the problem formulation and serve as referees to evaluate proposed solutions. The response I get is that the problem formulation without wellposedness by Fefferman is fine as a mathematical problem, even if it does not make sense from a physics point of view. The response is that it may well be that a solution will never be reached, but if so, let it be. But why not include wellposedness and make the Clay Navier-Stokes problem meaningful from a physics point of view, and then meaningful as a challenge to the development of mathematics? Why not open to possibility instead of impossibility? Why spend major efforts on a meaningless question without an answer? I pose this question to Fefferman, Constantin and Tao, with the hope of getting some response, to be reported.

PS1 We may compare with the lack of global warming since 2000: no progress of the temperature whatsoever. With this evidence one may ask if there may be some fundamental flaw in the idea of global warming.

PS2 Terence Tao sets out to "construct" a self-replicating solution of the Navier-Stokes equations which "blows up", in a 70-page paper-and-pen exercise, which shows to be impossible. We let instead the computer construct solutions, which turns out to be possible, and we observe that the constructed solutions become turbulent and thus show a form of blow-up.

PS3 It does not seem that Fefferman et al. are interested in communicating outside their own group, and so they respond by silence, whatever that means. Is this a sign of healthy strong science, which Mr. Clay presumably would prefer to support? The consequences are far-reaching: if the Clay problem formulation is wrong, then something bigger is wrong.

Tuesday, 13 May 2014

Parameter-Free Fluid Models: How to Make Einstein Happy

Towards Solution of the Clay Navier-Stokes Problem?

Watch a movie of turbulent flow as a solution of the Navier-Stokes equations. Quanta Magazine reports in A Fluid New Path in Grand Math Challenge (Feb 24):
• Tao's proposal is "a tall order," said Charles Fefferman of Princeton University.

We read that the Grand Math Challenge of the Clay Navier-Stokes Problem is taken on by one of the world's sharpest mathematicians, with the plan to construct a solution with smooth initial data which "blows up" in finite time, thus giving a negative answer to the Clay problem.
Tao thus seeks to construct a "fluid computer" capable of answering a mathematical question concerning the Navier-Stokes equations. Let us compare with our own approach to the Clay problem, based on using a digital computer to solve the Navier-Stokes equations computationally, which offers the following answer for the case of small viscosity, as presented in New Theory of Flight (see also blogpost):

1. Computations produce from smooth initial data functions with Navier-Stokes residuals small in $H^{-1}$ and large in $H^1$, which are non-smooth solutions that show stable mean-value outputs and thus represent physical turbulent states.
2. Smooth solutions are unstable and thus do not represent physical states.

In this analysis the aspect of stability is fundamental, as identified by Hadamard as well-posedness. Unfortunately, the Clay problem formulation does not include the aspect of well-posedness, and thus is meaningless. Including well-posedness gives a new Clay problem, which can be answered in a meaningful way, and this is what we seek to do. Computations thus produce non-smooth approximate solutions which are well-posed in a mean-value sense and thus are physical solutions, while smooth solutions turn out to be unstable and thus are not physical solutions. Our answer is different from Tao's in that computed solutions initiated from smooth initial data do not "blow up" but instead turn turbulent, with residuals becoming large in $H^1$ but with stable mean-value outputs. I have asked Tao for a comment on the message of this post and will report. More on the Clay problem here and here.

PS1 The fact that there has been no advance towards a solution of the Clay problem as formulated by Charles Fefferman in 2000, without reference to well-posedness, can be seen as evidence that the Clay question is ill-posed and thus cannot be answered. The problem thus requires reformulation, but the mathematicians in charge of the problem formulation do not seem to be open to such a thing. Hadamard's 1933 paper on the necessity of well-posedness seems to be forgotten. Strange. Very strange. The Navier-Stokes solution does not "blow up" but becomes non-smooth (turbulent), but this is not contained in the present formulation.

PS2 Quanta reports that it is observed that the ocean does not blow up spontaneously. Amazing: but ocean motion is partly turbulent, and thus is not smooth and well-behaved, and thus falls outside the allowed categories in the Clay problem, as either staying smooth or blowing up. No wonder that the problem as formulated has no solution. See also the following post.

Monday, 12 May 2014

Modern Physics as a Mess

Alexander Unzicker presents in The Higgs Fake a relentless criticism of modern physics, also presented on Youtube here and here. Take a look, and think for yourself!

Correspondence with Lennart Bengtsson (Who Promises to Fight for Science)

Letter from me to Lennart Bengtsson 11/5: Hello Lennart, As you have probably noticed, I have in repeated comments on your contributions in the media urged you to work for a rewrite of KVA's climate statement, from politically correct support of the IPCC to a correct scientific analysis of the dogma of CO2 alarmism. You have not answered my comments, but I hope that you will answer this direct mail and say how you view KVA's statement and whether you think it must now be rewritten. KVA's statement forms the basis of Swedish climate policy, and as an author and leading scientist you have a great responsibility to bear.
Kind regards, Claes

Reply from LB: I can only inform you that I have just been seriously criticized by an academy colleague for KVA's climate statement being watered down, and that I was to blame for this. At the same time you accuse me of having written an alarmist statement. These can hardly both be true? I suggest that you contact professor Olle Häggström so that the two of you can arrive at a new formulation that you can both support. KVA will in any case rewrite the statement, but this will hardly be with my participation.

My reply to LB: Thanks for your answer, Lennart. Why will you not take part in the writing of KVA's new climate statement, which you say is under way? Have you been cut out, or do you voluntarily choose to hand over the responsibility to people who know less than you? How, in that case, do you carry your responsibility as a scientist?

Reply from LB: Together with several members of the academy's 5th class I was responsible for the statement that was completed in September 2009 and then approved by KVA with two reservations, as far as I can recall. Olle Häggström was not one of them as far as I know; he was on the whole positive. He was consulted at some point during the work. If the academy now chooses to write a new statement, then perhaps I am not the right person to do it, after all the personal attacks I have been subjected to. KVA may want a less controversial person than me to lead this, one who is also more in line with the view preferred by the political side. I can understand this but do not share such a view. My responsibility as a scientist is a personal matter, and I will of course keep it. That I therefore, as now, am and surely will continue to be exposed to all kinds of criticism from both the "left" and the "right" is of course something I have to live with. Whether the criticism is justified or not is hardly for me to judge. Here it is my impression that you share the critical position with Olle Häggström.

My reply to LB: I believe that you have a responsibility beyond the merely personal for ensuring that KVA's new statement will be based on science and not politics. You have courageously stated your conviction as a scientist in the media, and that is admirable. I hope that you will not now give in because of malicious attacks, but do what you can so that KVA's new statement becomes one worthy of a scientific academy and not another soup of political correctness. Can Sweden count on this? PS Please do not lump me together with OH, who likes me as little as he likes you. I essentially share your view, as being a scientifically based position, as far as science has now come. I only hope that you continue to assert your insights. My only criticism would arise if you were to refrain from doing so.

Reply from LB: Thank you for your encouraging words. I promise to fight back...

Why Feynman Said: Nobody Understands Quantum Mechanics

The (trivial) commutator relation

• $xp - px = ih$,

where $x$ is the position (operator) and $p=\frac{h}{i}\frac{\partial}{\partial x}$ is the momentum (operator), is supposed to play a fundamental role in quantum mechanics, in particular as the origin of Heisenberg's Uncertainty Principle:

• $\sigma_x\sigma_p\ge \frac{h}{2}$,

where $\sigma_x$ is the standard deviation in measurements of position $x$, and $\sigma_p$ that of momentum. We see that both the commutator relation and Heisenberg's Uncertainty Principle concern the product of position and momentum. But such a product lacks physical meaning.
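For the record, the commutator relation itself is elementary to verify by letting both sides act on a test function $\psi (x)$, with $p=\frac{h}{i}\frac{\partial}{\partial x}$ as above:

• $(xp - px)\psi = x\frac{h}{i}\psi' - \frac{h}{i}(x\psi )' = x\frac{h}{i}\psi' - \frac{h}{i}\psi - x\frac{h}{i}\psi' = ih\psi$,

so that $xp - px = ih$ holds as an operator identity.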
Momentum $p$ has physical meaning and so has position $x$, but their product has no physical meaning. Momentum multiplied by velocity has a physical meaning as kinetic energy, but momentum multiplied by position does not. Force multiplied by velocity has a meaning as power (work per unit time), but force multiplied by position does not. Quantum mechanics is however obsessed with the product of momentum and position, with the message that because of the commutator relation they cannot both be determined at the same time and spot. The message is that this makes quantum mechanics fundamentally different from classical mechanics, where supposedly momentum and position can both be determined. There are two approaches to physics:

1. Make it as simple and understandable as possible.
2. Make it as complicated and mysterious as possible.

Quantum mechanics has developed according to 2, as evidenced by Richard Feynman:

• I think I can safely say that nobody understands quantum mechanics.

One reason is that the product of momentum and position is given a fundamental role, in contradiction to the fact that it has no physical meaning.

Sunday, 11 May 2014

How to Win Any Debate: Claim You Understand Entropy!

John von Neumann (1903-1957) was a very clever mathematician who offered the following advice:

• No one really knows what entropy really is, so in a debate you will always have the advantage (by pretending that you know).

This is still true, and causes a lot of confusion. If you want to improve your understanding then you could consult Computational Thermodynamics, which presents the 2nd Law of Thermodynamics resulting from the Euler equations for a compressible gas subject to finite precision computation in the following integrated form, with the dot signifying time differentiation (see the previous post):

• $\dot K+\dot P = W-D$
• $\dot E = -W + D$,

where $K$ is kinetic energy, $P$ potential energy, $W$ work, $E$ heat energy and $D\ge 0$ is turbulent dissipation, with $W > 0$ under expansion and $W < 0$ under compression. Adding the two equations gives $\dot K+\dot P+\dot E=0$, so total energy is conserved, while the sign of $D$ sets the direction of time, with an always one-way transfer of energy from $K+P$ to $E$ by turbulent dissipation. Here turbulent dissipation is the same as entropy production, or the other way around:

• Entropy production is the same as turbulent dissipation.

This removes the mystery from entropy and you can now win any debate, by really knowing what entropy is!

Saturday, 10 May 2014

Lennart Bengtsson on the Burning of Books

Lennart Bengtsson comments on an opinion piece in today's DN:

• With the large number of academics among the signatories one might perhaps have expected a little more critical and open thinking, and not just this fairy-tale mush. That the world today depends on fossil energy for more than 80%, with 1.4 billion people lacking access to electricity and half of the Earth's population undersupplied with energy, hardly seems to worry these knights of the light in the least.

• The next step will presumably be to ban the incorrect thinking, or to ban or even burn unsuitable books such as the newly published book by the prominent Belgian energy expert Samuele Furfari: "Vive les énergies fossiles!" with the subtitle "La contre-révolution énergétique". The only hopeful thing is that these signatories, or rather their climate-fighting students, do not normally read books in French. In the final stage we must expect that various unsuitable persons will also be banned in this new-Swedish reversed age of enlightenment.
Against this stands the fact that LB took part in the public burning at KTH on 4 December 2010 (post with 4051 page views) of my math book, because the mathematics of simple climate models questioned the then (and still) prevailing dogma of CO2 alarmism. Can we read LB's comment as an expression that LB would not do the same thing today? Could one say that book burning is not good because it leads to increased CO2 emissions?

Strange Laws by Strangest Man: Dirac

Paul Dirac, The Strangest Man, who conjured (strange) laws of nature from pure thought. In 1926 Paul Dirac introduced the classification of elementary particles into those with antisymmetric wave-function $\psi (x_1,\ldots,x_N)$ as a function of $N$ three-dimensional space variables $x_1,\ldots,x_N$, later named fermions after Enrico Fermi, and those with symmetric wave-function, later named bosons after Satyendra Nath Bose. Dirac conjectured that Nature is so constructed that only wave-functions which are either anti-symmetric or symmetric can occur, but could not give a reason other than mathematical beauty. Dirac was encouraged by the property of an antisymmetric wave function to change sign under permutation of two particles, which forbids two particles to be at the same spot (assuming the same spin), which he happily recognized as Pauli's exclusion principle. Since then it has become an incontrovertible fact, impossible to question, that Nature only accepts either anti-symmetric or symmetric wave-functions, but no underlying reason has ever been presented, other than mathematical beauty (for people who rightly can admire such a thing). But if it has no physical reason, Dirac's conjecture may be wrong. The first evidence to this effect is that the wave-function for Helium appears to be neither symmetric nor anti-symmetric, as representing a configuration with the two electrons separated into two opposite half-spheres. If Dirac's conjecture is wrong for $N=2$, it may well be wrong also for $N>2$, and then standard quantum mechanics collapses…

Basic Atmospheric Thermodynamics as 2nd Law

The debate on the temperature distribution in the atmosphere is going around in never-ending circulation, just like the air in the atmosphere. Let us here recall the basic statements of my chapter Climate Thermodynamics in a famous book, condensed into the 2nd law of thermodynamics expressed in the following form, with the dot signifying time differentiation:

• $\dot E = -W + D$.

There are two basic temperature distributions with linear decrease with height as lapse rate (assuming zero heat conductivity):

• Isothermal atmosphere with zero lapse rate: $D$ maximal with $W=D$.
• Maximal (dry adiabatic) lapse rate $=9.8\, C/km$ with $D=0$ minimal.

The observed lapse rate (of about 6.5 C/km) is somewhere between maximal and minimal. We note:

1. Lapse rate may increase by slow laminar vertical circulation, with ascending air cooling and descending air warming with $D=0$.
2. Lapse rate may decrease by turbulent dissipation $D>0$ heating upper layers.
3. A (partially) transparent atmosphere (like on Earth) heated from below may naturally develop a positive lapse rate by 1.
4. An opaque atmosphere (like on Venus) heated from above may become isothermal by heat conduction and may then develop a positive lapse rate by 1.

The lapse rate is basic to planetary climate since it determines the surface temperature from the temperature at the top of the troposphere, and its dependence on the radiative properties of the atmosphere is a key question in global climate science.
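For reference, the dry adiabatic value quoted above follows from hydrostatic balance ($dp = -\rho g\, dz$) combined with adiabatic expansion ($c_p\, dT = dp/\rho$), which give

• $\frac{dT}{dz} = -\frac{g}{c_p}\approx -\frac{9.8\ m/s^2}{1004\ J/(kg\cdot K)}\approx -9.8\ C/km$

for dry air, in agreement with the maximal lapse rate stated above.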
Compare with the previous post Lapse Rate by Gravitation: Loschmidt or Boltzmann/Maxwell?

Friday, 9 May 2014

Why Insist on Quantum Mechanics Based on Magic and Contradiction?

The ground state of Helium is postulated to be $1s^2$, with two overlaying electrons with opposite spin and identical spherically symmetric spatial wave-functions in the first shell, which is not the ground state because its energy is too large. This is the starting point for the Schrödinger equation for many-electron atoms. Here is a further motivation why it may be of interest to consider wave-functions for an atom with $N$ electrons as a sum of $N$ functions $\psi_1(x),\ldots,\psi_N(x)$, all depending on a common three-dimensional space coordinate $x$ (plus time), as suggested in a previous post:

• $\psi (x)=\psi_1(x)+\psi_2(x)+\cdots+\psi_N(x)$.

We recall that Schrödinger's equation for the Hydrogen atom, as the basis of quantum mechanics, takes the form:

• $ih\frac{\partial\psi}{\partial t}=-\frac{h^2}{2m}\Delta\psi +V\psi$ for all $x$ and $t>0$,

with kernel potential $V(x)=-\frac{1}{\vert x\vert}$, $x$ a three-dimensional space coordinate, $t>0$ time, $h$ Planck's constant, $m$ the mass of an electron and the corresponding one-electron wave-function $\psi (x,t)$ as solution. This equation is magically pulled out of a hat from the relation

• $E =\frac{p^2}{2m} + V(x)$

expressing conservation of energy $E$ of a body of mass $m$ with position $x(t)$ moving in a potential $V(x)$ with momentum $p=m\frac{dx}{dt}$, by the following formal substitutions:

• $E\rightarrow ih\frac{\partial}{\partial t}$,
• $p\rightarrow\frac{h}{i}\nabla$,

followed by formal multiplication by $\psi$. Energy conservation for the Hydrogen atom then takes the form:

• $E=K(t)+P(t)$ for all $t>0$, where
• $K(t) =\frac{h^2}{2m}\int\vert\nabla\psi (x,t)\vert^2\, dx$ is the kinetic energy,
• $P(t)=-\int \frac{\vert\psi (x, t)\vert^2}{\vert x\vert}dx$ is the potential energy

of the electron, under the normalization

• $\int\vert\psi (x,t)\vert^2\, dx=1$.

So far so good: The different energy levels $E$ of time-periodic solutions to Schrödinger's equation give the observed spectrum of the Hydrogen atom, with the corresponding wave-functions describing the distribution of the electron around the kernel. We see that the Laplace term gives rise to the kinetic energy as an effect of gradient regularization. But consider now the accepted standard text-book generalization of Schrödinger's equation to an atom with $N$ electrons:

• $ih\frac{\partial\psi}{\partial t}=-\sum_{j=1}^N(\frac{h^2}{2m}\Delta_j +\frac{N}{\vert x_j\vert})\psi + \sum_{k < j}\frac{1}{\vert x_j-x_k\vert}\psi$,

where $\psi (x_1,\ldots,x_N,t)$ depends on $N$ three-dimensional space coordinates $x_1,\ldots,x_N$ and time $t$, and $\Delta_j$ is the Laplace operator with respect to coordinate $x_j$, under the normalization

• $\int\vert\psi\vert^2\, dx_1\cdots dx_N=1$.

We see the appearance of the one-electron operators with corresponding one-electron kinetic energies:

• $K_j(t) =\frac{h^2}{2m}\int\vert\nabla_j\psi\vert^2\, dx_1\cdots dx_N$,

and electron-electron repulsion expressed by the coupling potential

• $\sum_{k < j}\frac{1}{\vert x_j-x_k\vert}$.

We see that in this model each electron $j$ is equipped with its own three-dimensional space with coordinate $x_j$ and its own kinetic energy $K_j$, with interaction between the electrons only through the coupling potential.
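As an aside, the uncontroversial one-electron starting point is easy to check numerically: the Hydrogen spectrum $E_n=-\frac{1}{2n^2}$ (in Hartree units, with $h=m=1$ in the notation above) comes out of a finite-difference discretization of the radial equation $-\frac{1}{2}u''-\frac{u}{r}=Eu$ for $u(r)=r\psi (r)$. Here is a minimal sketch in R, where the grid parameters are arbitrary illustrative choices:

# radial Hydrogen eigenvalues by finite differences (spherically symmetric case)
n  <- 2000                 # number of grid points (illustrative choice)
rmax <- 40                 # radial cut-off, large enough for the lowest states
dr <- rmax/(n + 1)
r  <- (1:n)*dr
H  <- diag(1/dr^2 - 1/r)   # diagonal: kinetic part plus Coulomb potential -1/r
H[cbind(1:(n-1), 2:n)] <- -1/(2*dr^2)   # off-diagonals of -(1/2) d^2/dr^2
H[cbind(2:n, 1:(n-1))] <- -1/(2*dr^2)
E  <- sort(eigen(H, symmetric = TRUE, only.values = TRUE)$values)
print(E[1:3])              # close to -1/2, -1/8, -1/18

The three lowest eigenvalues come out close to $-1/2$, $-1/8$ and $-1/18$, that is, the observed Hydrogen spectrum referred to above.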
The electron individuality and high dimensionality of the wave function $\psi (x_1,\ldots,x_N)$ is reduced by restriction to wave functions as products $\psi_1(x_1)\cdots\psi_N(x_N)$ built from three-dimensional wave functions $\psi_1,\ldots,\psi_N$, combined with symmetry or antisymmetry under permutations of the coordinates $x_1,\ldots,x_N$, which eliminates all individuality of the electrons. Extreme electron individuality is thus countered by permutations removing all individuality, but the individual one-electron kinetic energies $K_j$ are kept as if each electron keeps its individuality. This is strange. To see the result, recall that the ground state of minimal energy of Helium with two electrons is supposed to be given by a symmetric wave function $\psi (x_1,x_2)$

• $\psi (x_1,x_2)=\phi (x_1)\phi (x_2)$,

where $\phi (x_1)\sim \exp(-2\vert x_1\vert )$ is spherically symmetric, the same for both electrons. The two electrons of the ground state of Helium are thus supposed to have identical spherically symmetric distributions, denoted $1s^2$ as in the standard periodic table. The trouble is now that this configuration has energy (in Hartree units) $-2.75$, while the observed energy is $-2.903$. The true ground state is thus different from $1s^2$, and to handle this situation, while insisting that the ground state still is $1s^2$ as in the table, a so-called corrective perturbation is made, introducing a dependence of $\psi (x_1,x_2)$ on $\vert x_1-x_2\vert$ in a Rayleigh-Ritz minimization procedure. This way a better correspondence with observation is reached, because separation of the electrons is now possible: If one electron is on one side of the kernel, then the other electron is on the other side. But the standard message is contradictory:

• The ground state configuration for Helium is $1s^2$, which however is not the ground state because its energy is too large ($-2.75$ instead of $-2.903$).
• Smaller energy can be obtained by a perturbation computation, but the corresponding electron configuration is hidden to readers of physics books, because the ground state is still postulated to be $1s^2$.

If we minimize energy over wave functions of product form

• $\psi (x_1,x_2)=\psi_1(x_1)\psi_2(x_2)$,

without asking for symmetry, we find that the minimum is achieved with spherically symmetric $\psi_1=\psi_2$, with too large energy as just noted. However, if we instead compute the kinetic energy based on the sum with common space coordinate $x$

• $\psi_1(x) +\psi_2(x)$,

as suggested in the previous post, then separation of the electrons is advantageous, allowing discontinuous electron distributions (joining smoothly) without cost of kinetic energy, and better correspondence with observation is achieved.

• The standard attribution of individual kinetic energy appears to make the individual electron distributions "too stiff" and thus favors overlaying electrons rather than separated electrons, requiring Pauli's exclusion principle to prevent overlaying of more than two electrons.
• If kinetic energy is instead computed from the sum of individual electron distributions, electron "stiffness" is reduced and separation favored.
• Since the standard individual one-electron attribution of kinetic energy is ad hoc, there is little reason to insist that kinetic energy must be computed this way, in particular when it leads to an incorrect ground state already for Helium.
• Attributing kinetic energy to a sum of electron wave-functions allows discontinuous electron distributions joining smoothly without cost of kinetic energy. Electron individuality is here kept as individual distribution in space, while kinetic energy is collectively computed from the assembly. This would be the way to handle individuality in a collective macroscopic setting, and there is no reason why this would not be operational also for microscopics.
• Since the stated ground state $1s^2$ for Helium is incorrect, there is no reason to believe that any of the other ground states listed in the standard periodic table is correct.
• If so, then the claim that the standard Schrödinger's equation explains the periodic table has little reason.

PS1 The standard argument is that the standard multi-d Schrödinger equation must be correct since there is no case known for which the multi-d wave-function solution does not agree exactly with what is observed! But this is not a correct argument, because (i) the multi-d Schrödinger equation cannot be solved, and (ii) even if the wave-function could be determined, its physical meaning is unclear and so comparison with reality is impossible. The standard argument is to turn (i) and (ii) from scientific disaster into monumental success by claiming that since the wave-function is impossible to determine, there is no way to prove that it is not correct. Realizing that arguing this way does not follow basic scientific principle may open the way to searching for different forms of Schrödinger's equation, as non-linear systems of equations in three space dimensions instead of linear multi-d scalar equations, which are computable and have physical meaning, as suggested.

PS2 The standard way to handle the fact that the standard linear multi-d Schrödinger equation is uncomputable is to use Density Functional Theory (DFT), awarded the 1998 Nobel Prize in Chemistry, as a non-linear 3d scalar system in the electron density. DFT results from averaging in the standard linear multi-d Schrödinger equation, producing exchange-correlation potentials which are impossible to determine. If the standard multi-d linear Schrödinger equation is questionable, then so is DFT.

Thursday, 8 May 2014

Quantum Statistics as Salvation from Catastrophe?

Planck awarding the Planck Medal to Einstein in 1929 for his elaboration of Planck's idea of discrete quanta of energy into quanta of light, an idea which Planck viewed as a "hypothetical attempt" resulting from an "act of desperation". To understand a theory of physics it is helpful to seek the reason the theory was developed. In The Conceptual Development of Quantum Mechanics by Max Jammer we read:

• Quantum theory had its origin in the inability of classical physics to account for the experimentally observed distribution in the continuous spectrum of black-body radiation.
• It is convenient to define the first phase in the development of quantum theory as the period in which all quantum conceptions and principles proposed referred exclusively to black-body radiation or harmonic vibrations.
• …the study of the single physical phenomenon of blackbody radiation led to the conceptions of quanta and to quantum statistics of the harmonic oscillator, and thus to results which defied the principles of classical mechanics and, in particular, the equipartition theorem.
• It was generally agreed that classical physics was incapable of accounting for atomic and molecular processes.
• Planck obviously regarded the use of the law of chance… merely as a provisional device… in his own opinion his new theory was but a "hypothetical attempt" to reconcile the law of radiation with the foundations of Maxwell's doctrine, and not a final solution to the problem.

Quantum mechanics thus developed from Planck's hypothetical attempt to save the classical (Rayleigh-Jeans) radiation law, with radiance of frequency $\nu$ scaling like $T\nu^2$ with $T$ temperature, from an ultraviolet catastrophe with the radiance apparently tending to infinity without any bound on the frequency $\nu$. To save the world from this catastrophe, Planck, against his basic convictions as a scientist and seeing no other way out, gave up causality as the essence of science by corrupting his deterministic harmonic oscillators with statistics. And on this shaky ground quantum mechanics was formed. No wonder that quantum mechanics in its present form is a catastrophe (with uncomputable wave-functions without physical meaning), although depicted as an imposing intellectual structure of great beauty. But can statistics really save us from catastrophe? Catastrophe may be the result of an unfortunate throw of dice by fate, but you don't avoid a catastrophe by letting a throw of dice decide how to steer your car. Computational Blackbody Radiation describes a different way of avoiding the ultraviolet catastrophe, with statistics replaced by a constructive version of classical mechanics based on finite precision computation. From this starting point a quantum mechanics without statistics may be possible to formulate. If so, the present catastrophe of quantum mechanics can (perhaps) be avoided.

Wednesday, 7 May 2014

Is Blackbody Radiation Universal?

In a recent series of articles Pierre-Marie Robitaille questions the idea of universality of blackbody radiation. Let us see what the analysis of the model studied at Computational Blackbody Radiation can say. The model consists of a wave equation for a vibrating atomic lattice, augmented with small damping modeling outgoing radiation. The model is characterized by a lattice temperature $T$, assumed to be the same for all frequencies $\nu$, and a radiative damping coefficient $\gamma$, with corresponding radiance $R(\nu ,T)$ depending on frequency and temperature according to Planck's law (with simplified high-frequency cut-off):

• $R(\nu ,T)=\gamma T\nu^2$ for $\nu\leq\frac{T}{h}$,
• $R(\nu ,T)=0$ for $\nu > \frac{T}{h}$,

where the parameter $h$ defines the high-frequency cut-off. The model will, subject to frequency-dependent forcing $f_\nu$, reach equilibrium with incoming = outgoing radiation:

• $R(\nu ,T) =\epsilon f_\nu^2$ for $\nu\leq\frac{T}{h}$,

assuming for simplicity that frequencies $\nu >\frac{T}{h}$ are reflected, where $\epsilon\le 1$ is a coefficient of absorptivity = emissivity. The radiative qualities of a lattice can thus be described by the coefficients $\gamma$, $\epsilon$ and $h$, and the temperature scale $T$. Assume now that we have two lattices 1 and 2 with different characteristics $(\gamma_1,\epsilon_1, h_1, T_1)$ and $(\gamma_2,\epsilon_2, h_2, T_2)$ which are brought into radiative equilibrium. We will then have

• $\epsilon_1\gamma_1T_1\nu^2 = \epsilon_2\gamma_2T_2\nu^2$ for $\nu\leq\frac{T_2}{h_2}$,

assuming $\frac{T_2}{h_2}\leq \frac{T_1}{h_1}$, and for simplicity that 2 reflects frequencies $\nu > \frac{T_2}{h_2}$.
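In code, this equilibrium condition fixes the ratio of the two temperature scales; a toy illustration in R (all parameter values are invented for the example):

gamma1 <- 1.0; eps1 <- 1.0; T1 <- 300   # lattice 1, used as reference
gamma2 <- 0.5; eps2 <- 0.8              # lattice 2
T2 <- eps1*gamma1*T1/(eps2*gamma2)      # calibration: eps1*gamma1*T1 = eps2*gamma2*T2
nu <- 1:100                             # frequencies below the common cut-off
all.equal(eps1*gamma1*T1*nu^2, eps2*gamma2*T2*nu^2)   # TRUE: identical radiance

With the scale $T_2$ chosen this way, the two lattices radiate identically below the cut-off, which is the calibration described next.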
If we choose lattice 1 as reference, to serve as an ideal reference blackbody defining a reference temperature scale $T_1$, we can then calibrate the temperature scale $T_2$ for lattice 2 so that

• $\epsilon_1\gamma_1T_1= \epsilon_2\gamma_2T_2$,

thus assigning temperature $T_1$ to lattice 2 by radiative equilibrium with lattice 1 acting as ideal blackbody – effectively using 1 as a reference thermometer, assuming it has maximal cut-off. Any lattice 2 will then mimic the radiation of lattice 1 in radiative equilibrium, and a form of universality will be achieved. In practice lattice 1 is represented by a small piece of graphite inside a cavity with walls represented by lattice 2, with the effect that the cavity will radiate like graphite independent of its form or wall material. Universality will thus be reached by mimicking a reference, viewed as an ideal blackbody, which is perfectly understandable, and not by some mysterious deep inherent quality of blackbody radiation. Without the piece of graphite the cavity will possibly radiate with different characteristics, and universality may be lost. The analysis indicates that the critical quality of the reference blackbody is maximal cut-off and equal temperature of all frequencies, and not maximal absorptivity = emissivity = 1, since the effective parameter is the product $\epsilon\gamma$.

• All dancers who mimic Fred Astaire dance like Fred Astaire, but not all dancers dance like Fred Astaire.

Sunday, 4 May 2014

A Three-dimensional Multi-Electron Wave Function

Consider a wave function for an atom with $N$ electrons of the form $\psi (x)=\psi_1(x)+\cdots +\psi_N(x)$, with associated energy as the sum of kinetic energy, attractive kernel potential energy and repulsive interelectron energy:

• $E(\psi )= \frac{1}{2}\int\vert\nabla\psi\vert^2dx - \int\frac{N\psi^2}{\vert x\vert}dx+\sum_{j\neq k}\int\int\frac{\psi_j^2(x)\psi_k^2(y)}{2\vert x-y\vert}dxdy$,

under the normalization

• $\int\psi_j^2dx =1$ for $j=1,...,N$,

where $\psi_j(x)$ represents the distribution of electron $j$. The ground state is determined as the state of minimal energy, obtained as the solution of a non-linear system of equations in three space dimensions expressing minimality. We see that minimization favors atomistic wavefunctions $\psi (x)=\sum_j\psi_j(x)$ built from electronic wave functions $\psi_j$ with disjoint supports, which makes the interelectronic repulsion energy small without cost of kinetic energy. The ground state of Helium thus will have its two electrons separated into two half-spheres, with corresponding wave functions $\psi_1(x)$ and $\psi_2(x)$ meeting smoothly at a common separation surface. It is possible that this is the origin of the Zweideutigkeit or two-valuedness expressed in Pauli's exclusion principle, which Pauli did not like because it was ad hoc without rationale. The sequence of posts on Quantum Contradictions explores atomic ground states based on the above wave function with surprisingly good correspondence with observations; see also Many-Minds Quantum Mechanics. We compare with standard quantum mechanics with multi-dimensional wave functions $\psi (x_1,\ldots,x_N)$ depending on $N$ three-dimensional space coordinates $x_1,\ldots,x_N$, typically in the form of a Slater determinant as a linear combination of products of $N$ functions $\psi_1,\ldots,\psi_N$, each function separately depending on three space coordinates, thus based on wavefunctions depending on altogether $3N$ space coordinates.
Such multi-dimensional wave functions defy direct physical interpretation and are also impossible to compute for atoms with several electrons, and thus do not belong to science. Yet they are supposed to be fundamental to atomistic physics. The standard view is that macroscopic and microscopic (atomistic) physics are fundamentally different, because microscopic physics demands a multi-dimensional wave function, while macroscopic physics is described by systems of three-dimensional functions. If also microscopic physics can be described by systems of three-dimensional functions, as indicated above, then there will be no fundamental difference between macroscopic and microscopic physics, and a major obstacle for progress can be eliminated. Computations based on wavefunctions of the above form are under way and will be presented when available. For simple hand calculations see here and here.

PS1 For Helium with two electrons at distance $\frac{1}{2}$ from the kernel and mutual distance $1$ as an approximate ground state configuration in the above model, we get $E = -3$, to be compared with the observed $-2.903$. For Lithium with two electrons at distance $\frac{1}{3}$ from the kernel and mutual distance $\frac{2}{3}$, together with a third electron at distance 1 from an effective kernel of charge +1, we get $E = -8$, to be compared with the observed $-7.5$. For three electrons at distance $\frac{1}{3}$ from the kernel and mutual distance $\frac{1}{2}$ we get $E = -7.5$, indicating that the configuration with two electrons in an inner shell and one in an outer shell has smaller energy and thus is the actual ground state configuration for Lithium, thus obtained without reference to Pauli's exclusion principle.

PS2 Recall that standard quantum mechanics is formulated in terms of a multi-dimensional wave function $\psi (x_1,x_2,\ldots,x_N)$ depending on $N$ three-dimensional space coordinates $x_1,\ldots,x_N$, altogether $3N$ space coordinates, which is devastating because both physical interpretation and computational determination are impossible. To reduce the dimensionality, typically an Ansatz is made as Slater determinants of three-dimensional wave functions $\psi_i$, as linear combinations of products of the form (subject to permutations of the coordinates):

• $\psi (x_1,\ldots,x_N)=\psi_1(x_1)\psi_2(x_2)\cdots\psi_N(x_N)$,

leading to a set of one-electron wave equations coupled by complex exchange-correlation terms which are very difficult to determine. The above Ansatz with a sum instead of products of three-dimensional wave functions may offer more computationally manageable and thus more useful models.

PS3 For Beryllium with 4 electrons, we get $E=-14$ from 2 electrons at distance $\frac{1}{4}$ from the kernel with mutual distance $\frac{1}{2}$, together with $E = -\frac{2}{3}$ from 2 electrons of width $\frac{1}{2}$ at distance $\frac{1}{4} + \frac{1}{2}$ from an effective charge of +2, which gives altogether $E = -14.667$, which is exactly what is observed!

PS4 For $N$ electrons distributed over one shell at distance $\frac{1}{N}$ from the kernel, assuming the average distance between any pair of electrons is $\frac{1}{N}$, we get $E = -\frac{N^2}{2}$, which is much larger than the observed $E \approx -N^2$ and thus is not the ground state configuration. A multi-shell distribution in the model gives better agreement with observations, and so the model may capture the real shell structure (without resort to any Pauli exclusion principle).
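The hand estimates in PS1 can be reproduced with a few lines of R, if one assumes (consistently with the Hydrogen ground state, where $K=Z^2/2$ at distance $1/Z$ from the kernel) that an electron of width $r$ carries kinetic energy $\frac{1}{2r^2}$; this kinetic-energy rule is an assumption of the sketch, and energies are in Hartree units:

# energy of a configuration of electron clouds (a sketch, assuming
# kinetic energy 1/(2 r^2) per electron of width/radius r)
config_energy <- function(r, Z, pair_d) {
  K <- sum(1/(2*r^2))    # kinetic energy of each electron
  P <- -sum(Z/r)         # attraction to the (effective) kernel charge Z
  R <- sum(1/pair_d)     # repulsion between electron pairs at distances pair_d
  K + P + R
}
# Helium: two electrons at r = 1/2 from charge 2, mutual distance 1
config_energy(c(1/2, 1/2), 2, 1)                                      # -3
# Lithium: inner pair at 1/3 from charge 3 with mutual distance 2/3,
# plus an outer electron at distance 1 from effective charge +1
config_energy(c(1/3, 1/3), 3, 2/3) + config_energy(1, 1, numeric(0))  # -8
# Lithium, alternative one-shell configuration of PS1
config_energy(c(1/3, 1/3, 1/3), 3, c(1/2, 1/2, 1/2))                  # -7.5

The same function with $N$ electrons at distance $1/N$ and mutual pair distance $1/N$ reproduces the $-\frac{N^2}{2}$ of PS4.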
PS5 Note that the above model allows discontinuous electron distributions (joining smoothly) without cost of kinetic energy, which favors electron separation. We compare with Hartree models as systems of one-electron models with continuous electron distributions, for which separation requires a kinetic energy cost, and a resort to Pauli's exclusion principle is necessary to prevent more than two electron distributions from overlaying.

PS6 To find the ground state, we can use time-stepping of the parabolic system

• $\frac{\partial\psi_j(x,t)}{\partial t} = \Delta\psi (x,t) + \frac{N\psi (x,t)}{\vert x\vert}-\sum_{k\neq j}\int\frac{\psi_k^2(y,t)}{2\vert x-y\vert}dy\,\psi_j(x,t)$ for $t > 0$, $j=1,\ldots,N$,

with successive normalization to $\int\psi_j^2(x,t)\, dx=1$ after each time step, and $\psi =\sum_{k=1}^N\psi_k$. Further,

• $V_k\equiv\int\frac{\psi_k^2(y,t)}{2\vert x-y\vert}dy$

can be computed by solving $-\Delta V_k = 2\pi\psi_k^2$.
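As an illustration of the normalize-after-each-step mechanics of PS6, here is a toy one-dimensional R version for a single particle in a harmonic potential $V(x)=x^2/2$, whose exact ground state energy is $1/2$; all parameter values are just illustrative choices, and the full model would of course step all the $\psi_j$ together with the coupling potentials $V_k$ from the Poisson equation above:

# gradient flow ("parabolic time-stepping") with re-normalization, 1D toy case
n  <- 400; L <- 10; dx <- L/(n + 1)
x  <- seq(-L/2 + dx, L/2 - dx, length.out = n)
V  <- x^2/2
psi <- exp(-(x - 1)^2)                 # arbitrary starting guess
psi <- psi/sqrt(sum(psi^2)*dx)         # normalize
dt <- 1e-4                             # small enough for explicit stepping
for (step in 1:50000) {
  lap <- (c(psi[-1], 0) - 2*psi + c(0, psi[-n]))/dx^2   # Laplacian, Dirichlet ends
  psi <- psi + dt*(0.5*lap - V*psi)    # step in the descent direction of the energy
  psi <- psi/sqrt(sum(psi^2)*dx)       # re-normalize after each step
}
dpsi <- (c(psi[-1], 0) - c(0, psi[-n]))/(2*dx)
print(sum(0.5*dpsi^2 + V*psi^2)*dx)    # approx 0.5, the exact ground state energy

The same normalize-and-step loop carries over directly to the system of $\psi_j$ in three dimensions.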
Numerical solution of PDEs, Part 7: 2D Schrödinger equation

Haven't been posting for a while, but here's something new… Earlier I showed how to solve the 1D Schrödinger equation numerically in different situations. Now I'm going to show how to calculate the evolution of a 2D wavepacket in a potential energy field that has been constructed to mimic the classical "two-slit experiment", which shows how the mechanics of low-mass particles like electrons can exhibit interference similar to the mechanics of classical waves (sound, light, water surface, and so on). A 2D Schrödinger equation for a single particle in a time-independent background potential $V(x,y)$ is

$i\frac{\partial\Psi}{\partial t} = -\frac{1}{2}\left(\frac{\partial^2\Psi}{\partial x^2}+\frac{\partial^2\Psi}{\partial y^2}\right) + V(x,y)\Psi$,

where the particle mass has been set to 1 and Planck's constant to $2\pi$ (so that $\hbar = 1$). To solve this numerically, we need the Crank-Nicolson method, as was the case when solving the 1D problem. More specifically, writing $H$ for the discretized Hamiltonian (the finite-difference kinetic part plus the potential), the linear system to be solved at each time step is

$\left(I + \frac{i\Delta t}{2}H\right)\Psi^{n+1} = \left(I - \frac{i\Delta t}{2}H\right)\Psi^{n}$,

where the wavefunction now has two position indices and one time index, $\Psi^{n}_{j,k}$, and the potential energy has only two position indices. To form a model of the two-slit experiment, we choose a domain 0 < x < 6; 0 < y < 6 and make a potential energy function defined by

IF (x < 2.2 OR x > 3.8 OR (x > 2.7 AND x < 3.3)) THEN
  IF (3.7 < y < 4) THEN V(x,y) = 30
IF (x < 0.5 OR x > 5.5 OR y < 0.5 OR y > 5.5) THEN V(x,y) = 30
Otherwise V(x,y) = 0.

which corresponds to having hard walls surrounding the domain and a barrier with two holes around the line y = 3.85. For an initial condition, we choose a Gaussian wavepacket that has a nonzero expectation value of the momentum in the y-direction, for instance

$\Psi (x,y,0) \propto \exp\left(-3\left((x-3)^2+(y-1.5)^2\right) + ik_0 y\right)$ with $k_0 = 7$,

so that the packet starts below the barrier and moves up towards the slits. An R code that solves this problem for a time interval 0 < t < 1 (with the grid and packet parameters chosen here for a coarse, quick run) is

library(graphics)   # load the graphics library needed for plotting

nx <- 40            # interior grid points per dimension (coarse on purpose)
dx <- 6/(nx + 1)    # grid spacing on the domain 0 < x,y < 6
dt <- 0.002         # time step
nt <- 500           # number of time steps, covering 0 < t < 1

xv <- (1:nx)*dx     # interior grid coordinates

V <- matrix(0, nx, nx)            # potential energy on the grid
for (j in 1:nx) for (k in 1:nx) {
  x <- j*dx; y <- k*dx
  if ((x < 2.2 || x > 3.8 || (x > 2.7 && x < 3.3)) && (y > 3.7 && y < 4.0))
    V[j,k] <- 30                  # barrier with two slits around y = 3.85
  if (x < 0.5 || x > 5.5 || y < 0.5 || y > 5.5)
    V[j,k] <- 30                  # hard walls around the domain
}

k0  <- 7                          # initial momentum in the y-direction
psi <- outer(xv, xv, function(x, y) exp(-3*((x - 3)^2 + (y - 1.5)^2) + 1i*k0*y))
psi <- as.vector(psi)
psi <- psi/sqrt(sum(abs(psi)^2)*dx^2)   # normalize total probability to 1

# discrete Hamiltonian H = -(1/2)*Laplacian + V, built with Kronecker products
D2 <- diag(-2, nx)
D2[cbind(1:(nx-1), 2:nx)] <- 1
D2[cbind(2:nx, 1:(nx-1))] <- 1
D2 <- D2/dx^2
Id <- diag(nx)
H  <- -0.5*(kronecker(D2, Id) + kronecker(Id, D2)) + diag(as.vector(V))

# Crank-Nicolson: (I + i*dt/2*H) psi_new = (I - i*dt/2*H) psi_old;
# H does not depend on time, so we solve for the propagator once
A <- diag(nx^2) + (1i*dt/2)*H
B <- diag(nx^2) - (1i*dt/2)*H
M <- solve(A, B)

for (n in 1:nt) {
  psi <- M %*% psi
  if (n %% 25 == 0) {                       # save an image every 25 steps
    P <- matrix(abs(psi)^2, nx, nx)
    jpeg(file = paste("plot_abs_", n, ".jpg", sep = ""))
    image(xv, xv, P, zlim = c(0, 0.15))     # low ceiling makes the weak
    dev.off()                               # transmitted density visible
  }
}

The code produces a sequence of image files, where the probability density is plotted with colors, as an output. Some representative images from this sequence (converted to grayscale), and a video of the time evolution, are shown below. The threshold for maximum white color has been chosen to be quite low, to make the small amount of probability density that crosses the barrier visible. The discrete grid of points has been made quite coarse here to keep the computation time and memory use reasonable, and the resolution of the plots can be increased artificially by using linear interpolation between the discrete points. So, now we've seen how to solve the motion of 2D wavepackets moving around obstacles. In the next numerical methods post, I'll go through the numerical solution of a nonlinear PDE.
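One property worth checking in any run of the script above: Crank-Nicolson is a Cayley transform of the Hermitian matrix $H$ and is therefore unitary up to round-off, so the total probability should stay constant even though no normalization is re-applied during time-stepping. Using the variables of the script:

# total probability is conserved by the Crank-Nicolson propagator
print(sum(abs(psi)^2)*dx^2)   # still approx 1 after all nt steps

This is one of the main reasons for preferring Crank-Nicolson over simpler explicit schemes for the Schrödinger equation.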
The Unabashed Academic

07 September 2016

Could dark matter be super cold neutrinos?

Probably the greatest physics problems of the current generation are the cosmological questions. Thanks to the development of powerful new telescopes (many of them in space) in the last years of the twentieth century, startling new and unexpected results have pointed the way to new physics. These currently go under the names of "dark matter" and "dark energy", but those aren't real descriptions; rather they are suggestions for what might provide theoretical solutions to experimental anomalies. And, as naming often does, they guide our thinking into explorations of how to come up with new physics.

The problem that "dark matter" is supposed to resolve began in the 1970s with the observations of Vera Rubin. By making a careful analysis of the motion of stars in galaxies, she found an unexpected anomaly. As any first year physics student can tell you, Newton's law of gravitation tells you how planets orbit around the sun. The mass of the sun draws the planets towards it, bending their velocities ever inward in (nearly) circular orbits. The mathematical form of the law produces a connection between the distance the planets are from the sun and the speed (and therefore the period) of the planets. That connection was known empirically before Newton, to Kepler (Kepler's third law of planetary motion: the cube of the distance from the sun is proportional to the square of the planet's period). The fact that Newton's laws of motion together with his law of gravity explained that result was considered a convincing proof of Newton's theories.

A galaxy has a structure somewhat like that of a solar system. There is a heavy concentration of mass in the center – including a massive black hole – which one might expect to be responsible for most of the motion of the stars in the galaxy. Rubin found that the speed of the stars around the center didn't follow Kepler's law. The far-out stars were going too fast. This suggested that there was an unseen distributed mass that we didn't know about (or that Newton's law of gravity perhaps failed at long distances; in my opinion this option has not received enough attention, though that's for another post). Observations in the past thirty years have increasingly supported the idea that there is some extra matter that we can't see – and a lot of it. More than the matter that we do see. As a result, a growing number of physicists are exploring what might be causing this.

I saw a lovely colloquium yesterday about one such search. Carter Hall, one of my colleagues in the University of Maryland Physics Department, spoke about the LUX experiment. This explores the possibility that there is a weakly interacting massive particle (a "WIMP") that we don't know about – one that doesn't interact with other particles electromagnetically, so it doesn't give off or absorb light, and it doesn't interact strongly (with the nuclear force), so it doesn't create pions or other particles that would be easily detectable in one of our accelerators. This would make it very difficult to detect. The experiment was a tour de force, looking for possible interactions of a WIMP with a heavy nucleus – Xenon. (The interaction probability goes up like the square of the nuclear mass, so a heavy nucleus is much more likely to show a result.) The experiment was incredibly careful, ruling out all possible known signals.
It found no results but was able to rule out many possible theories and a broad swath of the parameter space – eliminating many possible masses and interaction strengths. An excellent experiment. But as I listened to this beautiful lecture, I wondered whether the whole community exploring this problem hadn't made the mistake of looking under the lamppost for our lost car keys. It's sort of wishful thinking to assume that the solution to our problem might be exactly the kind of particle that would be detectable with the incredibly large, powerful, and expensive tools that we have built – particle accelerators. These are designed to allow us to find new physics – in the paradigm we have been exploring for nearly a century: finding new sub-nuclear particles and determining their interactions in the framework of quantum field theory.

This reflects a discussion my friend Royce Zia and I have been having for five decades. Royce and I met in undergraduate school (at Princeton) and then became fast friends in grad school (at MIT). We spent many hours there (and since) arguing about deep issues in physics. We both started out assuming we wanted to be elementary particle theorists. That, after all, was where the action was. Quarks had just been proposed and there was lots of interest in the nuclear force and how to make sense of all the particles that were being produced in accelerators. But we were both transformed by a class in Many Body Quantum Theory given by Petros Argyres, a condensed matter theorist. In this class we saw many (non-relativistic) examples of emergent phenomena – places where you knew the basic laws and particles, but couldn't easily see important results and structures from those basic laws. It took deep theoretical creativity and insight to find a new way of looking at and rearranging those laws so that the phenomena emerged in a natural way.

There are many such examples. The basic laws and particles of atomic and molecular physics were well known at the time. Atoms and molecules are made up of electrons and nuclei (the structure of the nuclei is irrelevant for this physics – only their charge and mass matter) and they are well described by the non-relativistic Schrödinger equation. But once you had many particles – like in a large atom, or a crystal of a metal – there were far too many equations to do anything useful with. Some insight was needed as to how to rearrange those equations so that there was a much simpler starting point. Three examples of this are the shell model of the atom (the basis of all of chemistry), plasmon oscillations in a metal (coherent vibrations of all the valence electrons in a metal together), and superconductivity (the vanishing of electrical resistance in metals at very low temperatures). Each of these were well described by little pieces of the known theory arranged in clever and insightful ways – ways that the original equations gave no obvious hint of in their structure. I was deeply impressed by this insight and decided that this extracting or explaining phenomena from new treatments of known physics was just as important – and just as fundamental – as the discovery of new particles or new physical laws.

Royce and I argued this for many hours and finally decided to grant both approaches the title of "fundamental physics" – but we decided they were different enough to separate them. So we called the particle physics approach "fundamental-sub-one" and the many-body physics approach "fundamental-sub-two".
(Interestingly, both Royce and I went on to pursue physics careers in the f2 area, he in statistical physics, me in nuclear reaction theory.) In the decades since we had these arguments, physics has made huge progress in f2 physics – from phase transition theory to the understanding and creation of exotic (and commercially important) excitations of many-body systems.

So yesterday, I brought my f2 perspective to listening to Carter talk about dark matter and I wondered: He was talking all about f1 type solutions. Interesting and important, but could there also be an f2 type solution? We already know about weakly interacting massive particles: neutrinos. They only interact via gravity and the weak nuclear force, not electromagnetically or strongly. Could dark matter simply be a lot of cold neutrinos? They would have to be very cold – travelling at a slow speed – or else they would evaporate. When we make them in nuclear reactions in accelerators they are typically highly relativistic – travelling at essentially the speed of light. The gravity of the galaxy wouldn't be strong enough to hold them.

That leads to a potential problem for this model. Whatever dark matter is, it has to have been made fairly soon after the big bang – when the universe was very dense, very uniform, and very hot – hot enough to generate lots of particles (mass) from energy. (Why we believe this is too long a story to go into here.) So you would expect that any neutrinos that were made then would be hot – going too fast to become cold dark matter.
And for the past half dozen years or so, I've been holding many conversations with multiple biologists and learning some serious bio in the service of carrying out a deep reform on algebra-based physics to create an IPLS (Introductory Physics for Life Scientists) class – NEXUS/Physics. I wondered whether I had been sufficiently infected with biology memes to have gone over to the dark side. I needn't have worried. As expected, I came out "Physicist". Their description of a physicist was one I liked and that describes my favorite physicists (and I hope me too): "You’re a thinker who loves nothing more than getting stuck into a good intellectual challenge. You love to read, and you’ve got so much information (useless and otherwise) stored in your brain that everyone wants to have you on their pub quiz team. Physics suits you because it lets you spend your time contemplating some of the smallest and biggest things in the universe, and tackle some really huge questions while you’re at it." But I particularly found one item in the quiz interesting: "Select a real scientist." They offered three female scientists: Emmy Noether, Jane Goodall, and Rosalind Franklin. Although I assume that they matched Emmy to Physics, Jane to Biology, and Rosalind to Chemistry, I think of both Goodall and Franklin as biologists. I have read some of both of their work – one of Jane Goodall's books on chimpanzees (and I regularly contribute to her save the chimps foundation), and Rosalind Franklin's paper on X-ray diffraction from DNA crystals. I've never read any of Emmy Noether's original writings, but her work was introduced into my physics classes in junior year and had a powerful impact on my thinking about the world and about physics. That's what I want to talk about here. [But first, I'm inspired to make one of my typical academic digressions about a topic I've been thinking about: the structure of biological research. Reading E. O. Wilson's memoir, Naturalist, clarified for me a lot of what I have been seeing in my recent conversations with multiple biologists. I refer to this as "the Wilson/Watson abyss". About 1960, E. O. Wilson and J. D. Watson were both new Assistant Professors in the Harvard Biology Department. Over the next few years they engaged in a fierce battle for the soul of biology. What were the key issues for biology research for the next few decades? E. O., a field biologist rapidly becoming the world's greatest expert on ants, argued vigorously for a holistic approach: looking at whole animals, their behavior, how they interacted with others and their environments. J. D., fresh off his success in deciphering the structure of DNA and offering a molecular model for evolution, argued vigorously for a reductionist approach: studying the molecular mechanism of biology and the genome. The result was a split into two departments, and, essentially, a victory for Watson. Although there is excellent research in both areas, for the past half century, the strongest focus has been on microbiology and molecular models. Premier biology research institutes are often entirely focused on molecular and cellular biology and far more funding goes into that area. 
I personally think this is a problem and that the critical biological problems for the next half century are going to be that we HAVE to understand the systemic aspects of ecology – both for our interaction with the planet and even for medicine (through consideration of the human as an ecosystem by including our microbiome and the implications of social and environmental interactions on it). Of course this digression is inspired by the choices of Jane Goodall – a premier field biologist in the Wilson model (though she came through anthropology as a student of Louis Leakey), and of Rosalind Franklin – a premier biochemist in the Watson model (and her work was critical in allowing the Watson-Crick breakthrough). An interesting point for another post, is to note that evolution is the bridge that spans the Wilson/Watson abyss. Evolution is not a hypothesis or even really a theory, but rather a conclusion that grows out of a number of fundamental principles based strongly in observation and experiment: heredity (through DNA and its copying mechanism), variation, morphogenesis (the building of a phenotype – the individual organism – from the genomic info), and natural selection. (One might choose a different set, but this is one I like so far.) The first lies firmly on the Watson side, the last on the Wilson side. You can't make sense of evolution unless you are willing to consider both ends.] We now return to our main program. Why did I pick Emmy over Jane and Rosalind, both of whose work I have actually read and I think are immensely important? The reason is because for me as a physicist, Emmy Noether's result was a total game changer for me in the way I think about physics, the epistemology of physics, and how the world works. To state her result crudely in a way that the non-mathematician might understand, Noether's theorem says: If you have a system of interacting objects whose behavior in time is governed by a set of equations that have a symmetry, then you can find a conserved quantity. By a "symmetry", she means that you can change something about your description that doesn't change the math. By a "conserved quantity" she means something you can calculate that doesn't change as the system changes through time. (Of course Noether's theorem is a mathematical statement and there are conditions and a process to find the conserved quantity, but that requires a lot of math to elaborate. I refer you to the Wikipedia article on Noether's theorem for those who want the details. Warning: It requires knowledge of Lagrangians and Hamiltonian – junior level physics.) This is a little dense. Let's take an example or three to see just what it means. Suppose I have a set of interacting objects – something like the planets in the solar system interacting via gravity, or a set of atoms and molecules interacting via electric forces. We can describe these interactions either using forces or energy. (These approaches can be shown to be mathematically equivalent, though each tends to foreground different ways of thinking about the system.) The key is that the interactions of the objects only depend on the distances between them. This means that I can choose any coordinate system to describe the system: I can put my reference point – the 0 of my coordinates or origin – anywhere I want. Whatever origin I choose, the distance between two objects is the difference of the positions of those two objects and when you subtract their positions to get their relative distance, the position of the origin cancels. 
This is a symmetry. The equations that describe the motion of the system do not change depending on the position of the origin of the coordinate system. You can choose it as you like – and we typically pick an origin that will make the calculation simpler. This symmetry is called translation invariance. It means you can shift (translate) the origin freely without anything changing. But what Noether's theorem shows is that the symmetry doesn't just mean we are allowed to choose the coordinate system that makes the calculation simpler; it says there is a conserved quantity, and it allows you to find and calculate it. In the case of translation invariance, Noether's conserved quantity is momentum – in most cases, the product of the mass and velocity for each object. You calculate the momentum of each object in the system, add them up at one time, and for any later time you will always get the same answer, no matter how the objects have moved, even though the motions may be amazingly complicated – and may involve billions of particles!

This is immensely important and has powerful practical implications. One technical example is, "How can you figure out how protons move inside a nucleus or electrons move inside an atom?" In the case of protons, you don't actually know exactly what the force law between two protons is (though there are lots of models), but we are pretty sure that they only depend on the distance between them.* But we can shoot very fast protons at a nucleus. Sometimes they will strike a proton moving in the nucleus and knock it out. If we measure the momenta of the two outgoing protons, and since we know the momentum of the incoming proton, we can infer the initial momentum of the struck proton inside the nucleus using momentum conservation. We then do a lot of these scatterings and get a probability distribution for the velocities of protons inside the nucleus. Since we do know the force between electrons and the nucleus (the electric force), this technique is extremely powerful for studying the structure of atoms and molecules. While this seems rather technical, we'll see that there are even more important implications than providing a measurement tool for difficult-to-observe quantum systems.

Two other fairly obvious symmetries in our description of systems are:

• Time translation invariance
• Rotational invariance

The first, time translation, means that it doesn't matter when you start your clock (what time you take as 0 of time). This is true for most dynamic models in physics. Gravitational forces don't depend on time and neither do electrical ones. Since these are the two forces that dominate everything bigger than a nucleus, this symmetry holds for everything from atoms up to galaxies (where there are some as yet unsolved anomalies). Emmy's theorem says that due to the time translation symmetry there is a conserved quantity – in this case energy.

The second, rotational invariance, means that it doesn't matter in which direction you point your axes. You can take the positive x direction as being towards the north star or towards the middle star of Orion's belt. (You want your coordinates to be fixed in space, not rotating with the earth, or you introduce fake forces like centrifugal force and Coriolis forces.) The conserved quantity that goes with this is angular momentum, another useful principle (though more complicated to use because of more vectors). OK.
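For readers who do want a small taste of the math, here is the translation case worked out in the standard Lagrangian language of the Wikipedia article (a sketch, not the full theorem): take

• $L=\sum_i \frac{1}{2}m_i\dot x_i^2 - V(x_1-x_2,\, x_1-x_3,\,\ldots)$,

where the potential $V$ depends only on differences of positions. Shifting every position by the same constant, $x_i\to x_i+\epsilon$, leaves all the differences, and hence $L$, unchanged – that is translation invariance. The equations of motion are $m_i\ddot x_i = -\partial V/\partial x_i$, and because $V$ depends only on differences, the internal forces cancel pairwise, $\sum_i\partial V/\partial x_i = 0$, so that

• $\frac{d}{dt}\sum_i m_i\dot x_i = 0$:

the total momentum $\sum_i m_i\dot x_i$ is exactly the conserved quantity Noether's theorem promises.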
That tells us what Noether's theorem gives us – important conservation laws like (linear) momentum, energy, and angular momentum. But we learn about these in introductory physics classes without needing a sophisticated theorem. What does it add? For me, it adds something deeply epistemological – something fundamental about what we know in physics and how we know it. It shows that two very different things are tightly related: how we are allowed to describe the system at a given instant of time without changing anything (where we can choose our space and time coordinates) – a purely static statement about what kinds of forces or energies we have – and how the system moves in time – a dynamic statement about how things change.

This is immensely powerful. It means that if I have created a mathematical model of a system and I find that energy is NOT conserved, I know that either I have made a mistake, or I have assumed interactions that change with time. If I find that momentum is NOT conserved, I know that I must have tied something to a fixed origin rather than to a relative coordinate between two objects. Now this isn't always wrong or bad. If I have a particle moving through a vibrating fluid, I might want to treat the fluid like a fixed, time-dependent potential energy field. What this will mean is that the energy of my particle will not be conserved, and where the energy goes (into the fluid) will not be correctly represented in this model. A more common example is projectiles or falling bodies. Since the earth is so much larger than our projectiles, we take the origin of our coordinates as a fixed point on the earth instead of taking the force as depending (as it actually does) on the distance between the center of the earth and the projectile. This means we won't see momentum conserved, since we have fixed the earth: momentum transfer to it will not be correctly represented. This might not matter, depending on what we want to focus on.

But what Noether's theorem shows us is that there are powerful – and absolute – links between two distinct ways of thinking about complex systems: the structure of the mathematical models we set up to describe the evolution of systems, and characteristics of how those systems evolve in time. And that the result can be something as powerful and useful as a conservation law blew me away. More, that we now know exactly what characteristics of a mathematical model lead to a conservation law! There is nothing analogous to this in biology or chemistry – except as it is inherited from Noether's theorem in the mathematical models biologists or chemists build, or as they use energy or charge conservation. But as far as I can tell they rarely pay attention to conservation laws – even when they might do them some good.

It also showed me that when you build mathematical models you occasionally hit the jackpot: you get out more than you thought you put in. Extensions of Noether's theorem to other symmetries have become a powerful tool in constructing new models of dynamics. Instead of trying to invent new force laws, we look experimentally for conservation laws, find symmetries that can give those conservation laws, and construct new dynamical models by putting together variables that fit the symmetry. This is the way much of particle physics has functioned for the past 50 years. So that question on the quiz is probably the best selector of the "physicist" category.
Goodall and Franklin both did essential and pivotal work in their fields; but Noether's was a core pillar for all of 20th century physics, and for me, won hands down. Thanks Emmy!

12 March 2016

Congratulations, Bernie!

Congratulations, Bernie, on a surprise win in the Michigan primary! But my Bernie-phile friends: Please don't fall for the bad cognitive errors I've seen some supporters distributing in responses: binary and one-step thinking, and being misled by inapt metaphors.

First, "a win-is-a-win" carries a lot of associational baggage. Some of it may be true, and it's certainly worth some careful analysis, but it's a binary thinking error. In Michigan, Bernie beat Hillary by 1.5% of the vote. A win, right? But in delegate count – what matters in this primary election – Hillary took 70 and Bernie 67, increasing her lead. For the primaries and for the election as a whole, one needs to keep in mind that we live in a republic, not a democracy. That means we elect a representative government, and do not directly elect a president. Winning the total popular vote is not the point (just ask Al Gore), and this is reflected in both the Democratic and Republican primaries, though in different ways.

To see how this works, consider three districts of 10,000 voters. The winner of each district gets a delegate. Suppose candidate H wins two districts by 6,000 to 4,000 and candidate B wins one district by 9,000 to 1,000. Candidate H gets a total of 13,000 votes, while candidate B gets a total of 17,000. A big popular vote margin for B (57% to 43%) but a win for H (2 to 1). While this feels unfair, it's a way of guaranteeing that the political process requires coalition building among diverse sub-populations. We're seeing this in Bernie and Hillary's struggle to get the votes of different ethnic groups, different age groups, and different economic classes. In a parliamentary democracy with many parties, like in many European countries, this plays out by having to build coalitions among parties. In the USA, with only two parties, the coalitions are built at this stage. I don't think this is a bad thing, as I think the strength of America is our ability to (sometimes gingerly) bring together many different viewpoints, ethnic groups, and cultures, and get them to live together in reasonable harmony without frequent tribal and inter-group violence (so far). (Sorry, Black Lives Matter, I'm not trying to belittle your legitimate claims about inter-group violence in the US, only to point out that while horrible it has not reached the level of open warfare, and we seem to be finally bringing it into the open enough to possibly make some positive progress.)

Second, well, but "it's an unprecedented upset." This one-step thinking also carries a lot of associational baggage: it means "momentum"! Look at the derivative! That implies big change. Well, perhaps, but one learns in science that projecting derivatives is a tricky and unstable business. (See Mark Twain's quote on the growth of the Mississippi Delta.) Also, the "upset" depends on the difference between a poll and an election. An election is the event: its result is what it is (modulo errors, cheating, hanging chads, etc.). The poll is a sample that is much more akin to a measurement in physics. This plays quite well with stuff I teach in my physics class about measurement.
A measurement in physics is also a sample: an attempt to determine the property of something by "tasting" it – taking a little bit in a way that you can analyze the sample and not change the object being measured. Consider a thermometer as an example. When I'm poaching a salmon for a dinner party, I put a thermometer in my salmon poacher to test the temperature and find out how hot the water is. My students often assume "a measurement is a measurement and gives a true value," but it doesn't work this way. A measurement is simply a conjoining of two physical systems. What makes it a measurement is a set of theoretical assumptions about the process of their interaction. In the thermometer case, we assume:

• The zeroth law of thermodynamics: Energy will move between two objects in thermal contact in a direction to equalize their temperature (thermal energy density). So energy flows from a hot object into a cold one until they are the same temperature. This says we expect our thermometer to extract energy from the water until it is the same temperature as the water.

• The probe does not affect the state of the measured object significantly: The thermometer removes some energy from the water and so reduces its temperature. We assume that it only takes a little and that the reduction can be neglected. If I used my big poacher thermometer in an espresso cup to see if it was too hot, the temperature the thermometer reads would not be the original temperature of the coffee but something partway between.

• The probe has a linear response: We calibrate our thermometers by placing them in melting ice and putting a mark at 0 °C, and then in boiling water and placing a mark at 100 °C. The bimetal in the coil (or the liquid in the thermometer) expands as it gets hot and shifts the marker on the dial. We assume that halfway between those points is 50 °C and so on, but that isn't necessarily the case. It could expand more when it's colder and slow down when it gets hotter. Thermometers are carefully analyzed and can be trusted when used appropriately. (A similar analysis holds for voltmeters and ammeters.)

But the point is: When we make a measurement, it depends on theoretical assumptions about how our system is working. What does this have to do with polls? Well, a poll is a sample. A few voters are chosen to stand for the full population. The sample is too small to be chosen randomly: the error would be too large. So typically polls begin with a model of the electorate's demographics: who does the voting population consist of, and which of those are likely to actually vote in the election. These are often based on previous similar elections. But Michigan has not held a truly competitive Democratic primary in a long time. In 2012, Obama was unopposed. In 2008, Michigan tried to slip forward in time so as to be more important, and the DNC stripped half their delegates. Many of the candidates (including Obama) refused to campaign. The two previous primaries were caucuses. So it may be that there is a tidal wave of surprise support for Bernie. But it could also be that the Michigan polls were based on crappy models. A failure of polling, yes, but not representing a shift in support. The way we will tell is if somewhat similar states such as Illinois and Ohio, which have had more recent contested primaries and where primaries are held next week, also show significant underpolling for Bernie or not. I am willing to wait and see.
Third, I'm afraid I'm seeing a lot of "Cinderella underdog" metaphors; the idea that somehow the election is like a basketball tournament: you just have to keep winning the popular vote. But because of the electoral college this is a terrible metaphor and leads us astray. As Democrats we want to win the presidency. To do so we need a path to 270 electoral votes, and since those states are almost all (except, I think, Nebraska and New Hampshire) winner take all, it takes a careful analysis of an electoral strategy: how and where to devote resources to get out the vote – and which populations to concentrate on. This is where the great detail we are getting in the Democratic primaries can help us. And it is why "national polls" of one candidate against the other are, especially this early in the game, essentially useless. Not only do these show dramatic swings as the candidates face off against each other, they don't take into account the actual election mechanism.

If neither candidate gets a majority of the delegates as a result of the primaries (there are all those "superdelegates" or SDs), here's what I hope would happen. The SDs would all throw away their current commitments and turn to the Quants – the quantitative analysts who would make models of the presidential election based on various models of the electorate and the details of the primary results in the various states. There would be a spread (spray) of results – similar to what you see for paths of a hurricane – because of different assumptions plus random factors. The SDs would then use their personal knowledge of their own districts to evaluate those models and make their choices. That seems to me a good reason to have SDs. Maybe I'm dreaming to hope that things would work out this way and that they would choose the best choice for the fall election based on a detailed analysis of what we have learned from the primaries, but I'm a bit afraid that the SDs would look to support their personal interests rather than the interests of the party. I'm sure that wouldn't be true of my SDs – representatives whom I voted for and like very much. It's just all those other folks you voted for!

In any case, I will actively support whoever appears to have the best likelihood of winning the actual election, based on a careful analysis of our country's complex voting problems, not based on my agreement with their program (Bernie 98% to Hillary 94%), nor on my assessment of who is likely to be a more effective president in practice (Hillary 4 : Bernie 1). I am very dismayed at the direction the Republican party has been trending over the past 35 years, and it seems to be getting worse and worse. (Full disclosure: I voted for Republicans in New York State Senate elections in the 1960's but have never voted for a Republican presidential candidate.) So to my Bernie-phile friends who say he can win, I say, OK, show me! I'm watching!

23 November 2015

My teaching philosophy

I got my teaching position decades ago, long before anyone started to ask candidates to write a "Teaching Philosophy." I recently had to create one for an application for internal University funding. Despite having written about teaching for decades (I wrote a small book about it), I found it an interesting challenge to try to condense it all into a page-and-a-half. For your amusement, here it is.
My teaching philosophy is based on nearly 45 years of teaching students at the University of Maryland and more than 20 years of carrying out Discipline Based Education Research with students attempting to learn physics. It is also informed by my readings of the literature in education, psychology, sociology, and linguistics. My teaching philosophy grows out of a few basic principles:

• It's not what the teacher does in a class that determines learning, it's what the students do. Learning is something that takes place in the student. And deep learning – sense making – involves more than just rote. It involves making meaning: making strong associations with other things that the students already know and organizing knowledge into coherent and usable structures.

• I can explain for you, but I can't understand for you. Students assemble their responses to instruction from what they already know – appropriately or inappropriately. This can lead to what appear to be preconceptions that are incorrect and robust. Note, however, that these may be created "on the fly" in response to new information that is being presented.

• Students' expectations matter. The expectations that students have developed about knowledge and how to learn (epistemology), based on previous experiences with schooling, are extremely important. Their answers to the questions, "What's the nature of the knowledge we are learning? [e.g., facts or productive tools?] What do I have to do to learn it? [e.g., memorize or sense-make?]" may matter as much or more than the preconceptions they bring in about content.

• Science is a social activity. I'm teaching science, and science is all about how we know what we know. This is decided not by some algorithm but by a social process of sharing results, mutual evaluation, peer review, criticism, and discussion. Presenting a set of results to be repeated back is not science. Learning to do science means learning to participate in scientific conversations.

These lead me to rely heavily on a number of fundamental teaching guidelines:

1. Minds on – Look for activities that will engage the student's thinking and relevant experiences, making connections to things they know and are comfortable with.
2. Active engagement – Set up classes so that there is more for students to do, less listening.
3. Metacognition – Encourage students to be more explicit about their thinking, planning, evaluating. As a teacher, be explicit about your thinking and why you are asking them to do what you are asking them to do.
4. Enable good mistakes – Mistakes that you can learn from are "good mistakes." Set up situations where your students will learn to think about their thinking and how to debug their errors – but do it supportively, with some but not too much penalty for errors.
5. Group work – Create situations where students are expected to discuss scientific ideas with their peers, both in and out of class.

And finally

6. Listen! To create the activities described above, you need to know how students are responding. Therefore, set up situations that will let you hear what students are thinking and doing.

These ideas lead to my using lots of explicit techniques in class, including: having students read text and submit questions before class, asking challenging (and sometimes intentionally ambiguous) clicker questions followed by discussions of "why" and "how do we know," facilitating lots of group discussion, and "find someone who disagrees with you and see if you can convince them" as part of each class session.
And encouraging students to ask for regrades on quizzes and exams, and offering second-chance exams, among others. My experience with all this leads me to three concluding overarching ideas.

Diagnosis – When I first began teaching (for the first 30 years or so), if a student asked me a question, it was my instinct to answer it. In doing so I was using my experience as "the good student" and had not transitioned to being "the teacher." I had to learn that being the good student was no longer my job. My job was not necessarily to answer the student's question, but rather to consider, "Why couldn't this student answer this question for him/herself despite my having taught the material in class?" My job is in part to diagnose the students' difficulty, not answer their question. That requires a dramatically different interaction with my students. And learning when to answer a question directly (sometimes the right thing to do) is subtle.

Respecting different perspectives – In the past five years, working closely with students from a different discipline than my own, I have learned that many views that seemed to me bizarre or just plain wrong were actually well-justified in appropriate contexts. I have also learned from these same students that many of the approaches and results I took for granted and was used to teaching in my own discipline had hidden assumptions and required perspectives that were unnatural if not looked at with an expert's knowledge and the context of longer-term implications and applications.

Responsive teaching – Everything comes together in a fundamental overarching and unifying guideline: Listen to your students. Understand how they are interpreting and understanding (or misunderstanding) what you are teaching. Respect their views and what they bring to class, and respond by adjusting your instruction to match. This doesn't mean giving up your own view of what you want to teach or want them to learn. It means developing a good understanding of where they are and how you can help them get to where you want them to be.
Question about the Schrödinger equation

Dec 11, 2006 #1

Note that this is not homework... I'm just curious. The time-independent Schrödinger equation can be written as

[tex] \hat{H} \psi = E\psi[/tex]

Is there ever a time that the above equation is not true? What about the time-dependent case? We haven't gone over that in class, so I'm not quite sure about the cases. Thanks for your input!

Dec 11, 2006 #2

The time-independent SE gives you those specific states that are stationary. If you're looking for those states, then that equation is for you. It is not a general solution to a quantum problem, but looks at specific kinds of cases. However, it also turns out that the solutions of this equation (which is simply the eigen-equation of the Hamiltonian, in fact) allow you to find the more general time-dependent solutions. As an analogy, consider it something like the static equilibrium equation F = 0 in Newtonian physics. It is not the general dynamical solution dp/dt = F, but only gives you the solutions of static equilibrium. However, here it is less clear how the static solutions can help you find the general dynamic solutions, while in the quantum case there is a clear link.

Dec 11, 2006 #3

As vanesch says, it only holds for the stationary states. What it does is give you the states of definite energy (which are useful for several reasons), in this case the energy E. Compare with

[tex]\hat{p}| \psi \rangle = p| \psi \rangle[/tex]

which works for states of definite momentum, specifically of momentum p. Both of these are the eigenvalue equations for the respective operators. The time-dependent equation (the Schrödinger equation proper) is given by

[tex]\hat{H}| \psi \rangle = i\hbar \frac{d}{dt} | \psi \rangle[/tex]

and this holds for any quantum state. One can almost (problems arise with spin-statistics) view this equation as defining which [itex]| \psi \rangle[/itex] are allowable quantum states. In the case of time-independent Hamiltonians, we can separate the time dependence out, as I'm sure you will soon find out in class.

Dec 11, 2006 #4

The answers above are great, but here are some shorter ones. The above equation does not hold for time-dependent cases, i.e., when the wave function changes shape over time. A simple example is a particle moving through empty space (as a plane wave). To make the time-dependent form we replace the constant E with an expression which can handle the variations over time, [itex]\mathrm{i} \hbar \frac{\partial \psi}{\partial t}[/itex]:

[tex] \hat{H} \psi = \mathrm{i} \hbar \frac{\partial \psi}{\partial t}[/tex]

AFAIK, this is identical to the form given in the previous posts with kets. Hope this helps.

Dec 11, 2006 #5

So stationary states are defined as those whose wave function has a square modulus with no time dependence. How would we know this before solving for the wavefunction? If it was time-independent then certain [itex] \psi [/itex] would not depend on time. Would this mean that all solutions of the TISE (time-independent Schrödinger equation) are stationary states, since the time-dependent factor of the solution is always [itex] \exp(-iEt/\hbar) [/itex]? If we could do that then certainly our potential is not time-varying. Hence there's never a time dependence and the state is always stationary?

Dec 11, 2006 #6

We don't, but if we do know then we can use that to simplify the problem.
If we try to solve a dynamic problem with the time-independent equation then we should eventually end up with some nonsensical expression. It should be quite obvious when to use which one. Since ground states shouldn't change spontaneously, we can use the time-independent equation to approximate them. If we wish to model an evolving state then we need to use the time-dependent form. I think I may have made a mistake in the formula somewhere, though. I keep thinking that [itex]\frac{\partial \psi}{\partial t}[/itex] is equal to zero when nothing is changing over time, so that we should have [itex]\hat{H} \psi = 0[/itex] for the time-independent form. So I'm not sure exactly why the E's are not zero... probably some oversight on my part...

Dec 11, 2006 #7

Time-independence of a physical system only requires the square modulus of the wavefunction to be time-independent, not the wave function itself. This means that the wave function itself can vary with time, although only in a way that does not change its square modulus, which means that it can only change by phasing, in which case [itex]\hat{H} \psi = E \psi[/itex].

Dec 11, 2006 #8

Ah, of course, not sure why I forgot that; it is even mentioned already in this thread. It's also pretty damned important for calculations... oh well.

Dec 11, 2006 #9

Change the phasing... where does the phase factor appear in the wave equation? Let's say for the wavefunction of the infinite square well

[tex] \Psi(x,t) = N \sin\frac{n\pi x}{a} e^{-\frac{i E_{n}t}{\hbar}} [/tex]

Well, a wave can also be written like

[tex] A\sin(kx-\phi) + B\cos(kx-\phi) [/tex]

I don't quite see how the phi factor comes about in the wave equation... where would it be? Could it be like this?

[tex] \Psi(x,t) = N \sin\left(\frac{n\pi x}{a}+\Phi\right) e^{-\frac{i E_{n}t}{\hbar}} [/tex]

Dec 11, 2006 #10

You can add an arbitrary constant complex phase factor to a QM wave function without changing the physics:

[tex] \Psi(x,t) = N \sin\left(\frac{n\pi x}{a} \right) e^{-i \left( \frac{ E_{n}t}{\hbar} + \phi \right)} [/tex]

Physically, this corresponds to the fact that the "zero point" of potential energy (and therefore also of total energy) is arbitrary. Note what happens when you set [itex]V(x) = V_0[/itex] (a constant) inside the infinite square well, and then solve the Schrödinger equation again.

Dec 11, 2006 #11

Then wouldn't we have to consider different situations for the solution: suppose [itex]E > V_0[/itex], [itex]E < V_0[/itex], [itex]E = V_0[/itex]? Three possible solutions then, right? Just wanna make sure before I go ahead with this. The wavefunction would still have to vanish at the boundary points, though.

Dec 11, 2006 #12

Except that it won't give you a constant phase factor, but one of the form [tex]e^{-i V_0 t/\hbar}[/tex]

Dec 11, 2006 #13

Oops, you're right. I should have realized that "relocating" the bottom of the well would simply change [itex]E_n[/itex] to [itex]E_n - V_0[/itex]. The arbitrary [itex]\phi[/itex] simply indicates the "initial value" of the complex oscillation, and doesn't correspond to anything physical. Any time you calculate an expectation value or probability, it always contributes a net factor of [itex]e^{i \phi} e^{-i \phi} = 1[/itex]. This is true only for a [itex]\phi[/itex] that doesn't depend on x, of course.
If [itex]\phi[/itex] depends on x, that is, the initial phase is different at different locations instead of being uniform everywhere, things get interesting!

Dec 11, 2006 #14

In principle yes, but you already have to do that when [itex]V_0 = 0[/itex], i.e. consider the cases E > 0, E < 0, E = 0. For [itex]E < V_0[/itex] you'll find that it's impossible to construct a wave function that vanishes at both sides of the well. Have you done the finite square well ([itex]V = V_0[/itex] outside the well, and 0 inside) yet, or tunneling through a barrier (V = 0 outside and [itex]V = V_0[/itex] inside)? These are the situations where people usually first tackle [itex]E < V_0[/itex].

Dec 12, 2006 #15

Yes, I have done the tunnelling cases. So if it's impossible to make the wavefunction vanish at the end points, then what boundary conditions could we apply?

Dec 12, 2006 #16

We don't have any choice in the boundary conditions. The boundary conditions are fixed by the choice of potential. For an infinite square well, we must require that [itex]\psi = 0[/itex] at the boundaries. If a general mathematical solution of the S.E. with a particular E cannot satisfy that condition, then it is not admissible as a physical solution for this situation. As you've seen in the tunnelling case, the general solution for E < V involves real exponentials: [itex]\psi = Ae^{kx} + Be^{-kx}[/itex]. There's no way to set the arbitrary constants A and B so that [itex]\psi = 0[/itex] at both boundaries of the infinite square well. Therefore it is impossible to have E < V for that situation.

Dec 13, 2006 #17

Oh, I have another question about this. Is this always true?

[tex] \hat{H} \Psi = E \Psi [/tex]

Here we talk about the wavefunction in general. I think this is not always true... but I don't understand why. I'm thinking that it has something to do with the time-dependent case, where the right-hand side is

[tex] i\hbar \frac{d}{dt} \Psi [/tex]

This term need not represent the energy... right?

Dec 14, 2006 #18

The difference between a time-dependent Hamiltonian in the Schrödinger picture and a time-independent one has already been explained to you, and how that determines the time evolution of the state vector. [itex] \hat{H} \psi = E \psi [/itex] is a spectral equation for an operator on a separable Hilbert space and nothing more. It has solutions in the Hilbert space iff the spectrum of the Hamiltonian is discrete.
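For readers who want to see the stationary states of the infinite square well discussed in this thread numerically, here is a hedged sketch (my own illustration, not from any post): discretize the Hamiltonian on a grid with ψ forced to zero at the walls, diagonalize it, and check that the low-lying eigenvalues follow the expected n² pattern.

```python
import numpy as np

# Infinite square well of width a, in units where hbar = m = 1.
a, n_grid = 1.0, 1000
dx = a / (n_grid + 1)

# Finite-difference kinetic operator -(1/2) d^2/dx^2 on interior points;
# psi = 0 at the walls is built in by simply omitting the boundary points.
main = np.full(n_grid, 1.0 / dx**2)
off = np.full(n_grid - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E_num = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([(n * np.pi / a) ** 2 / 2 for n in (1, 2, 3, 4)])
print(np.round(E_num, 3))    # numerical energies, close to...
print(np.round(E_exact, 3))  # ...(n pi / a)^2 / 2, in ratios 1 : 4 : 9 : 16
```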
Following up on the previous MO question "Are there any important mathematical concepts without discrete analogue?", I'd like to ask the opposite: what are examples of notions in math that were not originally discrete, but have good discrete analogues? While a few examples arose in the answers to that earlier MO question, this wasn't what that question was asking, so I'm sure there are many more examples not mentioned there or at least not really explained there.

What reminded me of this older MO question was seeing an MO question "Why is the Laplacian ubiquitous?", since that is an instance of an important notion which has a discrete analogue. In an answer, it would be interesting to hear about the relationship between the continuous and discrete versions of the notion, if possible, and references could also be helpful. Thanks!

I don't actually know if this is true, but I would guess that the Fourier transform was discovered before the discrete Fourier transform. – Qiaochu Yuan Aug 19 '12 at 16:50

That's a great example – I actually wasn't so concerned about chronology; rather I was interested in understanding better the interesting relationships between the discrete and continuous versions of things, and thought it might be nice if there were a list of examples. You could certainly write an answer about the Fourier transform. – Patricia Hersh Aug 19 '12 at 17:01

Patricia, since you are asking for examples: please make this community wiki. There is no "right" answer here. – Vidit Nanda Aug 19 '12 at 17:53

@Vel: I thought some people could be more motivated to go to the trouble to write a good answer if I didn't make this CW – there have been other questions like this that aren't CW, especially when a good answer could involve substantial mathematics. So I wanted to see if I could hold off on that, at least for awhile. – Patricia Hersh Aug 19 '12 at 18:20

Some people do not answer big list questions until they are CW. I have flagged the mods because only they can make existing answers CW. – Benjamin Steinberg Aug 20 '12 at 0:08

12 Answers

Negative curvature of Riemannian manifolds, originally a differentiable theory, has been discretized in several phases. The first phase might have been Dehn's algorithm for the word problem in a surface group; I am guessing that at the time this might have seemed more an "application" of hyperbolic geometry than a discretization of it. But then comes the next big phase, the development of small cancellation theory, in which Dehn's algorithm (and related tools) were applied to many abstractly defined groups. The culminating phase was the development (by Gromov among others) of the theory of hyperbolic groups.

I'll give one answer to get things started: discrete Morse theory. A discrete Morse function assigns a real number to each face in a simplicial complex, or more generally to each cell in a regular CW complex. (With care, one can also work with non-regular CW complexes.) While in Morse theory there are critical points, each having an index, the discrete Morse theoretic analogue is a critical cell, with the dimension of a critical cell playing the role of index of a critical point. The Morse inequalities still hold, and one can still calculate Euler characteristic as an alternating sum of Morse numbers (i.e. an alternating sum of the number of critical cells of each dimension).
The original regular CW complex will be (simple) homotopy equivalent to a CW complex having fewer cells (unless all cells are critical), namely a CW complex whose cells are indexed by the critical cells. This analogy with Morse theory was established by Robin Forman in his paper "Morse theory for cell complexes", Adv. Math., 134 (1998), no. 1, 90-145. Another nice reference is his paper "A user's guide to discrete Morse theory". The idea has proven quite useful in the study of various simplicial complexes, e.g. in combinatorics, and the idea appeared independently in work of Ken Brown under the name "collapsing scheme".

You might add as a reference the paper of Bestvina and Brady, Morse theory and finiteness properties of groups. Invent. Math. 129 (1997), no. 3, 445–470. – Lee Mosher Aug 19 '12 at 16:02

Thanks! My description of the analogy was for Forman's notion, but it's a good idea to add this reference. – Patricia Hersh Aug 19 '12 at 17:46

We have to be careful with the simple homotopy claim. Given $f:X \to \mathbb{R}$ and setting $X^a = \lbrace \sigma \in X~|~f(\sigma) < a\rbrace $ there is a simple homotopy equivalence between $X^a$ and $X^b$ provided there are no critical values in $(a,b)$. On the other hand, when we cross a critical value, then we only have homotopy equivalence coming from the attaching map of the boundary of the critical cell: this need not be a simple homotopy equivalence. – Vidit Nanda Aug 19 '12 at 17:50

@Vel: I think one can also handle the critical cells by using some anticollapses, but I don't know a reference for this. Idea: once one removes critical cell $C$, one can see what elementary collapses to do to get down to $X^a$ and how they would carry the boundary of $C$ to have new attaching map $f_{C_a}$. Therefore, we first do an anticollapse by adding in cell $C'$ with attaching map $f_{C_a}$ along with a cell $D$ of dimension one higher that has $C'$ as a free face and also attaches to the cells $\sigma $ with $a\le f(\sigma ) \le b$. Now collapse away $C,D$ and the noncritical cells. – Patricia Hersh Aug 19 '12 at 18:49

Vel, my recollection is that Robin Forman made this statement various times at conferences that a discrete Morse function implies a simple homotopy equivalence, so probably that's how it became folklore. I think it's true, and hopefully the argument I gave above explains why. – Patricia Hersh Aug 20 '12 at 12:27

A simplicial set is a discrete analogue (and in many ways a generalization) of a topological space, giving rise to discrete notions of fibration, homotopy groups, etc.

Trees (in particular, homogeneous ones) are discrete analogues of Cartan-Hadamard manifolds (in particular, of simply connected manifolds of constant negative curvature). Although dealing with trees is much easier technically, they were considered much later: function theory, harmonic analysis, automorphism groups, random walks vs Brownian motion, representation theory, etc. One has to admit that mostly (not always, though) it was done by direct translation (sometimes almost verbatim) from continuous into discrete language.

Another example is provided by discrete potential theory (sometimes interpreted as the theory of resistive electrical networks). Here, once again, in spite of being much more elementary, it was developed significantly later than the continuous theory. I would say that in the latter case the discrete theory is more independent than in the case of geometry on trees.
Yet another example (where the discrete part is much more original) is buildings vs Riemannian symmetric spaces.

One of my favorite examples of this is the "q-calculus", which is like a multiplicative version of the classical subject of calculus of finite differences. One can, using suitably defined "q" versions of the derivative, integral, and so on, recover analogues of most of the usual theorems in calculus. But what's more interesting is that this all ties in with noncommutative geometry and the field with one element (see John Baez's This Week's Finds in Mathematical Physics).

That last sentence needs some justification – Yemon Choi Aug 19 '12 at 21:25

You're right. I added a reference. – Aleksandar Bahat Aug 19 '12 at 22:02

Finite graphs are a rich source of discrete analogues (I will be partially repeating the OP and some other answers here):

• The Laplacian on a finite graph is a discrete analogue of the Laplacian on a Riemannian manifold. In particular, it is possible to formulate the heat equation, the wave equation, and the Schrödinger equation on a finite graph. There are actually two Laplacians, a vertex Laplacian and an edge Laplacian, which give a discrete analogue of Hodge theory.

• The Ihara zeta function of a finite graph is a discrete analogue of the Selberg zeta function of a Riemannian manifold. A regular graph satisfies an analogue of the Riemann hypothesis if and only if it is a Ramanujan graph. There is also an analogue of the Selberg trace formula in this setting; Terras has written extensively about this kind of thing.

• The Picard group (or critical group, or sandpile group) of a finite graph is a discrete analogue of the Picard group of an algebraic curve. More generally, a lot of the theory of algebraic curves can be transported to this setting, e.g. the Riemann-Roch theorem.

(Finite graphs are also a rich source of other kinds of analogues; for example the Ihara zeta function is also analogous to the Dedekind zeta function of a number field, with coverings of graphs analogous to extensions of number fields and the Picard group analogous to the class group. There is even an analogue of the analytic class number formula in this setting, although I have forgotten the reference.)

Thanks! Great answer! – Patricia Hersh Aug 24 '12 at 13:47

I would consider symbolic dynamics as a discrete version of usual dynamical systems. This may depend on whether you view infinite words on finite alphabets as discrete.

Discrete difference equations generalize differential equations. In a similar spirit, divided difference operators generalize partial differentiation operators. Though such operators go back to Newton, there has been a resurgence of interest in them since the work of Lascoux and Schützenberger on Schubert polynomials. While partial differentiation operators satisfy commutativity relations $\partial_x \partial_y = \partial_y \partial_x$, the divided difference operators satisfy the nilHecke relations. This gives the discrete operators a certain richness that is not present in the continuous operators.

As one reference on this topic, I really like the paper: Sergey Fomin and Richard Stanley, "Schubert polynomials and the nilCoxeter algebra", Adv. Math. 103 (1994), 196–207. – Patricia Hersh Aug 20 '12 at 14:16

A more or less elementary example: Sperner's lemma is a discrete/combinatorial analog of the Brouwer fixed point theorem.
Furthermore, its one-dimensional case is a discrete analog of the intermediate value theorem.

Continuous-time random walks on graphs are in some sense a discrete analogue of diffusions on a Riemannian manifold (of course, the reverse can be argued, but I think that diffusions play a more central role in modern probability theory). Of course, the most important diffusion is Brownian motion, i.e., the Markov process associated with the Laplace-Beltrami operator. From my perspective, the natural analogue of Brownian motion is the operator $\mathcal{L}_V$ given by (we use unweighted graphs for simplicity)
\begin{equation*}
(\mathcal{L}_Vf)(x) := \sum_{y\sim x}(f(y)-f(x)).
\end{equation*}
A more 'common' choice might be the rate-1 continuous-time random walk with generator $\mathcal{L}_C$ given by
\begin{equation*}
(\mathcal{L}_Cf)(x) := \frac{1}{\deg(x)}\sum_{y\sim x}(f(y)-f(x)).
\end{equation*}
However, this choice of generator has several 'bad' properties if you want to view it as an analogue of Brownian motion – for example, the generator is always bounded on $L^2(\deg)$, it cannot have discrete spectrum, and the associated random walk cannot explode; in contrast, the operator $\mathcal{L}_V$ may be unbounded, and discrete spectrum and explosiveness are possible.

Once you have this discrete (space) analogue of Brownian motion on a Riemannian manifold, a natural question is to ask what the discrete analogue of the Riemannian metric should be for this process. It is not too hard to find examples that show that the graph metric is a bad analogue, since the Riemannian metric governs heat flow (in some sense) on a Riemannian manifold (see e.g. here), but Gaussian heat kernel estimates do not hold for the random walk associated with $\mathcal{L}_V$ if you take the manifold heat kernel estimates and replace the distance function with the graph metric. A reasonable analogue has been formulated recently, see e.g. here and here.
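To make the discrete operator above concrete, here is a small sketch (my own toy example in Python/numpy, not from the answer) that builds $\mathcal{L}_V$ for an unweighted cycle graph and runs the associated heat semigroup, the discrete analogue of diffusion: an initial point mass spreads out and converges to the uniform distribution.

```python
import numpy as np

# 'Physical' graph Laplacian L_V on an unweighted cycle with n vertices:
# (L_V f)(x) = sum over neighbors y of (f(y) - f(x)), i.e. L = A - D.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = A - np.diag(A.sum(axis=1))

# Heat equation df/dt = L f, solved via the matrix exponential e^{tL}
# computed from an eigendecomposition (scipy.linalg.expm would also do).
w, V = np.linalg.eigh(L)
def heat(f0, t):
    return V @ (np.exp(t * w) * (V.T @ f0))

f0 = np.zeros(n); f0[0] = 1.0   # point mass at one vertex
for t in (0.1, 1.0, 10.0):
    print(t, np.round(heat(f0, t)[:5], 4))  # spreads toward 1/n everywhere
```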
771458a15d748820
Take the 2-minute tour × All of us have probably been exposed to questions such as: "What are the applications of group theory...". This is not the subject of this MO question. Here is a little newspaper article that I found inspiring: Madam, – In response to Marc Morgan’s question, “Does mathematics have any practical value?” (November 8th), I wish to respond as follows. Apart from its direct applications to electrical circuits and machinery, electronics (including circuit design and computer hardware), computer software (including cryptography for internet transaction security, business software, anti-virus software and games), telephones, mobile phones, fax machines, radio and television broadcasting systems, antenna design, computer game consoles, hand-held devices such as iPods, architecture and construction, automobile design and fabrication, space travel, GPS systems, radar, X-ray machines, medical scanners, particle research, meteorology, satellites, all of physics and much of chemistry, the answer is probably “No”. – Yours, etc, The Irish Times - Wednesday, November 10, 2010 The above article article seems to provide an ideal source of solutions to a perennial problem: How to tell something interesting about math to non-mathematicians, without losing your audience? However, I am embarrassed to admit that I have no idea what kind of math gets used for antenna designs, computer game consoles, GPS systems, etc. I would like to have a list applications of math, from the point of view of the applications. To make sure that each answer is sufficiently structured and developed, I shall impose some restrictions on their format. Each answer should contain the following three parts, roughly of the same size: • Start by the description of a practical problem that any layman can understand. • Then I would appreciate to have an explanation of why it is difficult to solve without mathematical tools. • Finally, there should be a little explanation of the kind of math that gets used in the solution. ♦♦  My ultimate goal is to have a nice collection of examples of applications of mathematics, for the purpose of casual discussions with non-mathematicians. ♦♦ As usual with community-wiki questions: one answer per post. share|improve this question Related: mathoverflow.net/questions/2556/… –  Qiaochu Yuan Feb 24 '11 at 18:55 I think you'd need a list of topics that you'd like to know more about, lest this develops in a whole new encyclopedia. I just picked three examples from the newpaper piece you cited. –  Tim van Beek Feb 25 '11 at 3:05 @Tim: The article I cited provides such a list. I'd already be quite happy if an example could be provided for each of the items in there. –  André Henriques Feb 25 '11 at 10:51 @André: Ok, is there an item on the list that has not been addressed and that your are particularly interested in? I feel like I could write books about every one :-) –  Tim van Beek Feb 27 '11 at 10:49 @Tim: Actually yes: antenna design. –  André Henriques Feb 27 '11 at 15:02 10 Answers 10 up vote 48 down vote accepted Sending a man to the Moon (and back). Hilbert once remarked half-jokingly that catching a fly on the Moon would be the most important technological achievement. "Why? "Because the auxiliary technical problems which would have to be solved for such a result to be achieved imply the solution of almost all the material difficulties of mankind." (Quoted from Hilbert-Courant by Constance Reid, Springer, 1986, p. 92). 
The task obviously required solving plenty of scientific and technological problems. But the key breakthrough that made it all possible was Richard Arenstorf's discovery of a stable 8-shaped orbit between the Earth and the Moon. This involved the development of a numerical algorithm for solving the restricted three-body problem which is just a special non-linear second order ODE (see also my answer to the previous MO question). Another orbit, also mapped by Arenstorf, was later used in the dramatic rescue of the Apollo 13 crew. share|improve this answer One typical way that GPS is invoked as an application of mathematics is through the use of general relativity. Most people have a rough idea of what the GPS system does: there are some (27) satellites flying in the sky, and a GPS device on the surface of the earth determines its position by radio communication with the satellites. It is also pretty clear that this is a hard problem to solve, with or without mathematics. The basic idea is that if your GPS device measures its distance between 3 different satellites, then it knows that it lies on three level sets which must intersect at a point. This is the standard idea of triangulation. Of course measuring distance is hard to do, and relativity comes into play in many different, nontrivial ways, but there is one way in particular that is interesting and easy to explain. If one uses the euclidean metric to determine the distance (so, straight lines) from the GPS to the satellite, then it will be impossible to determine the location on the earth to a high degree of accuracy. So instead the GPS system uses the kerr metric, that is the lorentz metric that models spacetime outside of a spherically symmetric, rotating body. Naturally this metric gives a different, more accurate distance between the observer on earth and the satellite. The thing that is surprising to people is that the switch from euclidean to kerr is required to get really accurate gps readings. In other words, without relativity you might not be able to use that iphone app to find your car in the grocery store parking lot. People are often surprised and interested to learn that the differences between relativity and newtonian gravity really are observable. Other standard examples are the precession of the perihelion of mercury (which was a famous unsolved problem before the introduction of GR) and the demonstration that light rays do not travel along straight lines by photographing the sun during an eclipse. This last observation demonstrated, for instance, that the metric on the universe is not the trivial flat one. share|improve this answer Another place where relativity is relevant to GPS is time dilation. Fundamentally GPS computes distances by calculating time differences, so every satellite contains an atomic clock. Before they launch them they calibrate the clocks, but they have to be detuned by an amount that takes into account both the SR and GR time dilation effects in order to be accurate when they get into orbit. –  hobbs Feb 9 '14 at 3:10 A particularly striking application to physics and chemistry is explained in Singer's book Linearity, symmetry, and prediction in the hydrogen atom. The practical problem, in the large, is easy to state: what is the stuff around us made of, and why does it react with other stuff the way it does? More precisely, what explains the structure of the periodic table? 
There is no a priori reason that the elements ought to naturally arrange themselves in rows of size $2, 8, 8, 18, 18, ...$ with repeating chemical properties. This periodic structure profoundly shapes the nature of the world around us and so ought to be well worth trying to understand on a deeper level. Physically, the answer has to do with the way that electrons arrange themselves around a nucleus, one of the classic examples of the breakdown of classical mechanics. The Bohr model posits that electrons are arranged in discrete orbitals $n = 1, 2, 3, ... $ with energy levels proportional to $- \frac{1}{n^2}$ such that the $n^{th}$ energy level admits at most $2n^2$ electrons. This behavior $- \frac{1}{n^2}$ can be empirically deduced by an examination of atomic spectra but the Bohr model still does not provide a conceptual explanation of it. That explanation comes from full-blown quantum mechanics, which already requires a fair amount of nontrivial mathematics. For our purposes quantum mechanics will be described by a Hilbert space $K = L^2(X)$ where $X$ is the classical phase space (e.g. $\mathbb{R}^3$) and a self-adjoint operator $H : K \to K$, the Hamiltonian, which will describe the evolution of states via the Schrödinger equation. The simplest case is that of an electron orbiting a single proton, in which case one can explicitly write down the potential. In this case the Schrödinger equation can be solved fairly explicitly and the answer tells you what electron orbitals look like, but it turns out that one can do much better: it is possible to predict the solutions and their properties using representation theory. To start with, the Coulomb potential has a spherical symmetry, so this endows $K$ with the structure of a unitary representation of $\text{SO}(3)$. By identifying two wave functions together if they lie in the same representation we can hope to have a physical classification of the possible states of an electron; the idea is that physical quantities we care about should be invariant under physical symmetries (e.g. mass, energy, charge). The action of $\text{SO}(3)$ breaks up the space of possible states based on their angular momentum (Noether's theorem). The corresponding representations have dimensions $1, 3, 5, 7, ...$ and indeed we find that we can decompose the number of elements in each row of the periodic table as $$2 = 1 + 1$$ $$8 = 1 + 1 + 3 + 3$$ $$18 = 1 + 1 + 3 + 3 + 5 + 5$$ corresponding to the possible angular momentum values allowed at each energy level. Of course these symmetry considerations apply to every spherically symmetric system so the $\text{SO}(3)$ symmetry cannot tell us anything more specific. But it turns out there is even more symmetry to exploit. First of all, remarkably enough the $\text{SO}(3)$ symmetry extends to an $\text{SO}(4)$ symmetry. (I do not really know a conceptual explanation of this, unfortunately; I have a half-baked one which I'm not sure is valid.) The irreducible representations of $\text{SO}(4)$ occurring here are precisely the ones of dimensions $1, 1 + 3, 1 + 3 + 5, ...$ and they break up into irreducible $\text{SO}(3)$ representations in exactly the right way to account for the above pattern up to a factor of $2$. Second of all, the factor of $2$ is accounted for by an additional action of $\text{SU}(2)$ coming from electron spin (the thing that makes MRI machines work). 
So representation theory provides a strikingly elegant answer to the question of how the periodic table is arranged (if one accepts that a single proton is a good approximation to a general atomic nucleus). Of course there is much more to say here about the relation between representation theory and physics and chemistry, but I am not the one to ask... share|improve this answer My half-baked explanation for the SO(4) symmetry is this: the wave functions we are interested are localized near the origin, so their Fourier transforms are smeared out in momentum space. Smeared-out functions on R^3 look like functions on S^3, and the proton in R^3 is completely smeared out in S^3, so... there is an SO(4) symmetry in momentum space. Or something like that. –  Qiaochu Yuan Feb 25 '11 at 1:10 You could mention almost any physical theory (e.g. General Relativity, anything with PDEs, etc.) - almost all involve some degree of nontrival mathematics. Why QM in particular? I actually think it's a particularly bad example! You say: "...but the Bohr model still does not provide a conceptual explanation of it." But I would say the same applies to Quantum Mechanics! Maybe it gives correct formulae and predictions for (currently known) experimental data, but it's TOTALLY crazy and strange. Maybe in 100-200 years from now, it will have been replaced by something totally different! –  Zen Harper Feb 25 '11 at 5:34 ...just to make it clear, I'm obviously not suggesting that QM is not useful or not mathematical! I just think there are many other things from mathematical physics which are much less crazy and counterintuitive, and so would be much better examples for conversation with nonmathematicians. To me, QM looks like a totally incorrect theory which only gives the correct formulae by chance. A proper explanation is still waiting to be found. –  Zen Harper Feb 25 '11 at 5:40 @Qiaochu: That's fine as a basic example of the use of representation theory. But note that much more subtle aspects of how the Periodic Table is organized, which for a long time were only experimentally observed, have just recently been described mathematically: by detailed studies of asymptotic properties of the Schrödinger equation combined with some rep theoretical aspects, see <a href="mpip-mainz.mpg.de/theory/events/namet2010/…;. –  Thomas Sauvaget Feb 25 '11 at 10:07 @Qiaochu: Sorry the link should be mpip-mainz.mpg.de/theory/events/namet2010/… Also, the SO(4) symmetry is a property of the classical 2-body problem, which is thus inherited by the quantum one: for a natural conceptual explanation you can have a look at this wikipedia section en.wikipedia.org/wiki/… (and that whole wikipedia page for more details on the corresponding additional conserved quantity). –  Thomas Sauvaget Feb 25 '11 at 10:16 Here are some examples to quote my favorite one: In 1998, mathematics was suddenly in the news. Thomas Hales of the University of Pittsburgh, Pennsylvania, had proved the Kepler conjecture, showing that the way greengrocers stack oranges is the most efficient way to pack spheres. A problem that had been open since 1611 was finally solved! On the television a greengrocer said: “I think that it's a waste of time and taxpayers' money.” I have been mentally arguing with that greengrocer ever since: today the mathematics of sphere packing enables modern communication, being at the heart of the study of channel coding and error-correction codes. 
In 1611, Johannes Kepler suggested that the greengrocer's stacking was the most efficient, but he was not able to give a proof. It turned out to be a very difficult problem. Even the simpler question of the best way to pack circles was only proved in 1940 by László Fejes Tóth. Also in the seventeenth century, Isaac Newton and David Gregory argued over the kissing problem: how many spheres can touch a given sphere with no overlaps? In two dimensions it is easy to prove that the answer is 6. Newton thought that 12 was the maximum in 3 dimensions. It is, but only in 1953 did Kurt Schütte and Bartel van der Waerden give a proof. The kissing number in 4 dimensions was proved to be 24 by Oleg Musin in 2003. In 5 dimensions we can say only that it lies between 40 and 44. Yet we do know that the answer in 8 dimensions is 240, proved back in 1979 by Andrew Odlyzko of the University of Minnesota, Minneapolis. The same paper had an even stranger result: the answer in 24 dimensions is 196,560. These proofs are simpler than the result for three dimensions, and relate to two incredibly dense packings of spheres, called the E8 lattice in 8 dimensions and the Leech lattice in 24 dimensions.

This is all quite magical, but is it useful? In the 1960s an engineer called Gordon Lang believed so. Lang was designing the systems for modems and was busy harvesting all the mathematics he could find. He needed to send a signal over a noisy channel, such as a phone line. The natural way is to choose a collection of tones for signals. But the sound received may not be the same as the one sent. To solve this, he described the sounds by a list of numbers. It was then simple to find which of the signals that might have been sent was closest to the signal received. The signals can then be considered as spheres, with wiggle room for noise. To maximize the information that can be sent, these 'spheres' must be packed as tightly as possible. In the 1970s, Lang developed a modem with 8-dimensional signals, using the E8 packing. This helped to open up the Internet, as data could be sent over the phone, instead of relying on specifically designed cables. Not everyone was thrilled. Donald Coxeter, who had helped Lang understand the mathematics, said he was "appalled that his beautiful theories had been sullied in this way".

Computerized tomography

In computerized tomography one measures X-ray images of a body from different angles. Each X-ray image roughly corresponds to a projection of the density distribution along a certain direction. To obtain the full density distribution inside the body one has to invert the Radon transform. This is an interesting problem from integral geometry which is also challenging concerning the numerical implementation, since the inverse is known to be discontinuous and hence regularization techniques have to be employed. Another interesting aspect of this story is that the mathematical problem of the inversion of the Radon transform was solved around 1917 (by Johann Radon himself), while this was totally unknown to the inventors of computerized tomography as it is used today.

Example: Information sent over the internet needs to be secure, such that only the sender and the recipient can understand and use it. For instance, a man-in-the-middle attack on a bank transaction: You send an order to your bank to pay 100 Dollars to Mr. X. I intercept this transmission and change the order to your bank to make them send me 100,000 Dollars instead.
computerized tomography

In computerized tomography one measures X-ray images of a body from different angles. Each X-ray image roughly corresponds to a projection of the density distribution along a certain direction. To obtain the full density distribution inside the body one has to invert the Radon transform. This is an interesting problem from integral geometry which is also challenging concerning the numerical implementation, since the inverse is known to be discontinuous and hence regularization techniques have to be employed. Another interesting aspect of this story is that the mathematical problem of the inversion of the Radon transform was solved around 1917 (by Johann Radon himself), while this was totally unknown to the inventors of computerized tomography as it is used today.

Example: Information sent over the internet needs to be secure, such that only the sender and the recipient can understand and use it. Example: a man-in-the-middle attack on a bank transaction: You send an order to your bank to pay 100 dollars to Mr. X. I intercept this transmission and change the order to your bank to make them send me 100,000 dollars instead. Since all information sent over the internet passes through a lot of different computers (gateways), all I need to intercept your message is access to one of those computers. Thousands of network administrators do have such access (this is grossly simplified, of course). In order to secure the information, my bank and I need to know an algorithm for cryptography. Commonly used are algorithms using public/private key pairs. These consist of functions such that:

1. the bank publishes a public key k,
2. I can apply a function f, using the public key k, to the information inf I would like to send, producing an encrypted message f(inf, k),
3. the whole punchline is that the inverse function can only be computed by knowing the private key, which only my bank knows. So only my bank can recover the information inf from f(inf, k).

Commonly used algorithms are based on the assumption that there is no efficient algorithm to factorize large numbers, i.e. to compute the prime factors of a given large number. The validity of this assumption has not been proven. So you can a) get famous by proving it, b) get either famous (and provoke a collapse of internet banking) or insanely rich by finding an algorithm that computes prime factors efficiently, or c) get famous and rich by finding a public/private key algorithm that is both efficient and provably secure.

I don't want to be the person who makes most of the current coding algorithms useless ... this seems to be quite dangerous. –  Martin Brandenburg Feb 25 '11 at 9:15

It would be dangerous if someone developed a cracking algorithm to use this for his own good instead of publishing it. –  Tim van Beek Feb 27 '11 at 10:48
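A minimal sketch of such a public/private key pair, using textbook RSA with toy numbers (my own choice of primes and message; real deployments use primes of a thousand or more bits plus padding schemes): the public key is (n, e), and computing the private exponent d requires the factors of n.

```python
# Textbook RSA with toy numbers (no padding; purely illustrative).
p, q = 61, 53                        # the bank's secret primes
n = p * q                            # public modulus, 3233
e = 17                               # public exponent, coprime to (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+); needs p and q

inf = 1234                           # the order, encoded as a number < n
cipher = pow(inf, e, n)              # f(inf, k): anyone can encrypt
print(pow(cipher, d, n))             # only the private-key holder recovers 1234
```

Strictly speaking, breaking RSA is only known to be no harder than factoring; their equivalence is open, which is part of why option (c) above is still up for grabs.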
computer game consoles

Many computer games today display some sort of 3D real-time graphics. There are two important aspects: a) the display of a 2D projection of a 3D object model needs involved algorithms that calculate what is visible from the viewpoint of the observer, what objects look like from that perspective, and shading and light effects on colors. These algorithms need concepts from 3D geometry (linear algebra, vectors, areas, projection operators etc.). Many computer science departments have classes for the involved mathematics. b) the animation effects of many games are calculated by numerical solutions of partial differential equations describing the physical motion of solid bodies and fluids. In order to animate fluids like water, for example, computer games use finite element approximations to the Navier-Stokes equations. Needless to say, this is a very active area of current research. (Computer consoles are actually used in research involving computational fluid dynamics because they are cheap, easy to program and very powerful.)

The last part is also true for automobile design and fabrication: Car companies need to test new car designs, for example for mechanical problems: Are there any parts that will make noise once you drive faster than 50 km/h? This is tested with software that simulates the mechanical parts of the proposed car design using finite element approximations to the equations of solid-state mechanics. The same technique is also used to simulate crash tests. The design of the car body is done via CAD (computer-aided design) software. This software uses approximation and interpolation algorithms to calculate external surfaces that are as smooth as possible while satisfying boundary conditions that are specified by the designer. These approximations are done e.g. by spline interpolation. Numerical approximations of computational fluid dynamics are also used to simulate tests in the wind tunnel. This actually saves a lot of money. (It is also the reason why modern cars all kinda look alike.)

circuit design and computer hardware

Example: a robotic arm has to create a complicated electronic circuit by putting a conducting material on a non-conducting base. The robotic arm has to traverse the whole graph that makes up the circuit at least once. In order to reduce the time the robot needs to create one electronic circuit, the path it traverses needs to be minimized; that is, one needs a good approximate solution to the traveling salesman problem. I know of examples where improved heuristic approximation algorithms have increased the output by several percent (the student doing the math thesis on this was rewarded by the company producing these circuits with several hundred thousand dollars. No, it wasn't me.).

(Dredged up from the murky past...) Designing control systems usually involves building a logic circuit that has several inputs and one or two outputs. Sometimes states are involved (sequencing of traffic lights, coin collectors for vending machines), sometimes not. In designing such control logic, many equations get written down which represent things like "If these three switches are off and these others are on, flip this switch over here". Once one has the equations written down (often as a Boolean function, a map from {0,1}^n to {0,1}), one has to build the circuit implementing these equations. Often, the medium for implementation is a gate array, which may be a field of NAND logic gates that can be wired together, or a programmable logic device, which is like two or more gate arrays, some with ANDs, some with ORs, some NOT gates, flip-flops which are like little memory stores, and so on. The major question is: are there enough gates on the device to build all the logic represented by the equations? To this end, computer programs called logic minimizers are used. They have certain definite rules (related to manipulating terms in Boolean logic) and certain heuristics (guidelines and methods for following the guidelines) to follow in order to minimize the number of, say, AND and OR gates used in representing the equations. The mathematics of representing any Boolean function as a series of AND and OR gates, and finding equivalent representations, has been developed and used since George Boole set down the algebraic form of what is now called Boolean logic. Computer science, abstract algebra, clone theory, all have played and continue to play an essential role in solving instances of this kind of problem. The fact that it is not completely solved is related to one of the Millennium Prize problems (P vs NP). Gerhard "Ask Me About PLD Chips" Paseman, 2011.02.24

The error-correction required for cell phones and 3G and 4G devices to work is mathematics!
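As a minimal illustration of that last answer (my own example, not from the thread): a Hamming(7,4) code corrects any single flipped bit using nothing more than linear algebra over GF(2).

```python
# Hamming(7,4): encode 4 data bits into 7, correct any single flipped bit.
import numpy as np

G = np.array([[1,1,0,1], [1,0,1,1], [1,0,0,0], [0,1,1,1],
              [0,1,0,0], [0,0,1,0], [0,0,0,1]])     # generator matrix
H = np.array([[1,0,1,0,1,0,1],
              [0,1,1,0,0,1,1],
              [0,0,0,1,1,1,1]])                     # parity-check matrix

data = np.array([1, 0, 1, 1])
code = G @ data % 2                                 # 7-bit transmitted word
received = code.copy()
received[4] ^= 1                                    # the channel flips one bit
syndrome = H @ received % 2                         # encodes the error position
if syndrome.any():
    pos = int(syndrome @ [1, 2, 4]) - 1             # syndrome = 1-based position in binary
    received[pos] ^= 1                              # flip it back
print(np.array_equal(received, code))               # -> True
```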
C++QEDElements v2 Milestone 10 – a framework for simulating open quantum dynamics: generic elements

Table of supported general-purpose elements

This is a list of general-purpose free and interaction elements (residing in CPPQEDelements) that are supported in the framework, meaning that they will always remain an integral part of the framework's distribution. The supported scripts (residing in CPPQEDscripts) rely on them, so that they are tested by the testsuite(s). The exact list of these elements has some historical determination, and it may be extended and supplemented in the future. For how to add custom elements to the framework and write custom scripts using them, cf. the directories CustomElementsExample and CustomScriptsExample.

Frees

Reside in CPPQEDelements/frees

Mode

A single harmonic-oscillator mode that might be driven and interact with a reservoir of possibly finite temperature. Notation: $z\equiv\kappa(2n_\text{Th}+1)-i\delta$.

| Class name | Hamiltonian | $U$ | Liouvillean | Displayed characteristics |
|---|---|---|---|---|
| Mode | n/a | $e^{i\delta\,t\,a^\dagger a}$ | n/a | $\avr{a^\dagger a},\;\text{var}\lp a^\dagger a\rp,\;\real{a},\;\imag{a}$ |
| ModeSch | $-\delta\,a^\dagger a$ | n/a | | |
| PumpedMode | $i\lp\eta a^\dagger\,e^{-i\delta\,t}-\hermConj\rp$ | = Mode | | |
| PumpedModeSch | $-\delta\,a^\dagger a+i\lp\eta a^\dagger-\hermConj\rp$ | n/a | | |
| LossyMode | n/a | $e^{-z\,t\,a^\dagger a}$ | $2\kappa\lp(n_\text{Th}+1)\,a\rho a^\dagger+n_\text{Th}\,a^\dagger\rho a\rp$ | |
| LossyModeUIP | $-i\kappa(2n_\text{Th}+1)\,a^\dagger a$ | = Mode | | |
| LossyModeSch | $-iz\,a^\dagger a$ | n/a | | |
| PumpedLossyMode | $i\lp\eta a^\dagger\,e^{z\,t}-\eta^* a\,e^{-z\,t}\rp$ | = LossyMode | | |
| PumpedLossyModeUIP | LossyModeUIP + PumpedMode | = Mode | | |
| PumpedLossyModeSch | LossyModeUIP + PumpedModeSch | n/a | | |

(In these tables a blank cell inherits the entry above it, as in the original rowspanned layout.)

Qbit

Qbit has the same versions as Mode, but finite temperature is not yet implemented in this case. Notation: $z\equiv\gamma-i\delta.$

| Class name | Hamiltonian | $U$ | Liouvillean | Displayed characteristics |
|---|---|---|---|---|
| Qbit | $-iz\,\sigma^\dagger\sigma+i\lp\eta\sigma^\dagger-\hermConj\rp$ | depends on used picture | $2\gamma\,\sigma\rho\sigma^\dagger$ | $\rho_{00},\;\rho_{11},\;\real{\rho_{10}},\;\imag{\rho_{10}}$ |

The Qbit algebra is identical to a Mode with cutoff=2, so that we can reuse much code from Mode when implementing Qbit.

Spin

Spin is characterized by a fixed magnitude $s,$ whereupon its quantum numbers are $m=-s\dots s,$ so that its dimension is $2s+1$. In the framework, all state vectors are indexed starting with 0, so that the element $\Psi(0)$ corresponds to the basis vector $\ket{-s}$. Notation: $z\equiv\gamma-i\delta.$

| Class name | Hamiltonian | $U$ | Liouvillean | Displayed characteristics |
|---|---|---|---|---|
| Spin | n/a | $e^{-z\,t\,S_z}$ | n/a | $\avr{S_z},\;\avr{S_z^2},\;\real{S_+},\;\imag{S_+}$ |
| LossySpin | n/a | | $2\gamma\,S_-\rho S_+$ | |
| SpinSch | $-z\,\boldsymbol{\theta}\cdot\mathbf{S}$ | n/a | n/a | |

$\boldsymbol{\theta}\cdot\mathbf{S}$ is in general not diagonal in the eigenbasis $\ket{m}$ of $S_z,$ so that it would not be convenient for defining an interaction picture.
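Since each element above is fully specified by a Hamiltonian and a Liouvillean, the tables are easy to cross-check outside the framework. Below is a minimal sketch in Python with QuTiP (not C++QED code; cutoff and parameter values are arbitrary illustrations). As I read the tables, the non-Hermitian pieces of the listed Hamiltonians (such as $-i\kappa(2n_\text{Th}+1)\,a^\dagger a$) are the quantum-trajectory counterparts of the jump terms, so in a master-equation solver they are carried by the collapse operators instead and only the Hermitian part enters H.

```python
# Cross-check of the pumped, lossy mode (Schroedinger picture) against the tables:
# Hermitian part of H: -delta a†a + i(eta a† - eta* a);
# Liouvillean 2kappa((nTh+1) a rho a† + nTh a† rho a) as collapse operators.
import numpy as np
from qutip import destroy, fock_dm, mesolve

cutoff, delta, eta, kappa, nTh = 20, 1.0, 0.5, 0.1, 0.2   # illustrative values only
a = destroy(cutoff)
H = -delta * a.dag() * a + 1j * (eta * a.dag() - np.conj(eta) * a)
c_ops = [np.sqrt(2 * kappa * (nTh + 1)) * a,              # downward jumps
         np.sqrt(2 * kappa * nTh) * a.dag()]              # thermal upward jumps
times = np.linspace(0, 60, 600)
result = mesolve(H, fock_dm(cutoff, 0), times, c_ops, e_ops=[a.dag() * a])
print(result.expect[0][-1])   # steady-state <a†a>, comparable with a C++QED run
```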
Particle

These elements could be more accurately called a 1D motional degree of freedom. The basic Hamiltonian $H=p^2/(2\mu)$ is most conveniently implemented in momentum basis. A discrete k-basis amounts to a finite quantization length in x-space. Our choice of units is such that the smallest momentum is $\Delta k=1$, so that the quantization length in x-space is $2\pi$. The use of a discrete k-basis entails periodic boundary conditions in x-space. The spatial resolution is an integer power of 2, to be able to perform radix-2 FFT.

Notation: recoil frequency $\omrec\equiv\hbar\,\Delta k^2/(2\mu)=1/(2\mu),$ k-operator $k\equiv p/(\hbar\,\Delta k)=p$. Hence the basic Hamiltonian: $H=\omrec k^2.$ The Particle elements have conservative dynamics.

| Class name | Hamiltonian | $U$ | Displayed characteristics |
|---|---|---|---|
| Particle | n/a | $e^{-i\omrec t\,k^2}$ | $\avr{k},\;\text{var}(k),\;\avr{x},\;\text{dev}(x)$ |
| ParticleSch | $\omrec k^2$ | n/a | |
| PumpedParticle | $\vclass\abs{m_\text{Int}(x)}^2$ | = Particle | |
| PumpedParticleSch | $\omrec k^2+\vclass\abs{m(x)}^2$ | n/a | |

Here, $m(x)$ is the mode function of the pump, which can be $\sin(Kx),\;\cos(Kx),\;e^{\pm iKx}$ with arbitrary integer $K$.

Simulation of moving particles is inherently hard, since the Schrödinger equation is a partial differential equation, and we inevitably have to deal with both position and momentum representations, which are linked by Fourier transformation. In quantum optics, however, the particles are mostly moving in potentials created by electromagnetic fields, mainly standing and running waves. In this case we can stay in momentum space during the whole time evolution. A strange consequence is that in numerical physics the harmonic oscillator seems to be hard, while the cosine potential is easy (see the sketch below).

See also: The MultiLevel bundle

Interactions

Reside in CPPQEDelements/interactions

All the operators are automatically taken in interaction picture, if the underlying free element is in interaction picture.

| Class name | Free elements | Hamiltonian | Displayed characteristics |
|---|---|---|---|
| JaynesCummings | (Qbit / Spin) – Mode | $i\lp g^*\sigma a^\dagger-\hermConj\rp$ | n/a |
| GeneralDicke | Mode – Spin | $\displaystyle u\,a^\dagger a\lp S_z+\frac s2\rp+y\lp a+a^\dagger\rp S_x$ | n/a |
| NX_CoupledModes | Mode – Mode | $u\,a^\dagger a\lp b+b^\dagger\rp$ | n/a |
| QbitModeCorrelations | Qbit – Mode | n/a | $\real{\avr{\sigma a^\dagger}},\;\imag{\avr{\sigma a^\dagger}},\;\real{\avr{\sigma a}},\;\imag{\avr{\sigma a}},\;\real{\avr{\sigma_z a}},\;\imag{\avr{\sigma_z a}}$ |
| ModeCorrelations | Mode – Mode | n/a | covariances of the modes' quadratures |
| ParticleOrthogonalToCavity | Mode – PumpedParticle | $\text{sign}\{U_0\}\sqrt{U_0\vclass}\lp a^\dagger m(x)+\hermConj\rp$ | n/a |
| ParticleAlongCavity | Mode – (Pumped)Particle | $U_0\abs{m(x)}^2 a^\dagger a+\text{sign}\{U_0\}\sqrt{U_0\vclass}\lp a^\dagger m(x)+\hermConj\rp$ | n/a |
| ParticleTwoModes | Mode – Mode – Particle | $\sqrt{U_{01}U_{02}}\lp m_1(x)m_2(x)\,a_1^\dagger a_2+\hermConj\rp$ | n/a |

See also: multi-level Jaynes-Cummings, The MultiLevel bundle
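Returning to the remark above that the cosine potential is easy in momentum space: for $m(x)=\cos(Kx)$ we have $\vclass\cos^2(Kx)=\vclass/2+(\vclass/4)\lp e^{2iKx}+e^{-2iKx}\rp$, and $e^{\pm 2iKx}$ simply shifts $k$ by $\pm 2K$, so the PumpedParticleSch Hamiltonian is a banded matrix in the k-basis. A minimal sketch (plain NumPy with illustrative values, not framework code):

```python
# PumpedParticleSch Hamiltonian in the discrete k-basis for m(x) = cos(Kx):
# vclass*cos^2(Kx) = vclass/2 + (vclass/4)(e^{i2Kx} + e^{-i2Kx}),
# and e^{±i2Kx} shifts the momentum index by ±2K, so H is banded.
import numpy as np

omrec, vclass, K, kmax = 1.0, 5.0, 1, 32     # illustrative values only
ks = np.arange(-kmax, kmax + 1)              # momentum grid in units of Delta k = 1
H = np.diag(omrec * ks**2 + vclass / 2.0)    # kinetic term plus constant offset
for i in range(ks.size - 2 * K):             # couplings k <-> k + 2K
    H[i, i + 2 * K] = H[i + 2 * K, i] = vclass / 4.0
print(np.linalg.eigvalsh(H)[:5])             # lowest levels in the standing wave
```

A harmonic potential $x^2$, by contrast, is dense in this basis, which is the sense in which the oscillator is "hard" while the cosine potential is "easy".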
Physicist, Startup Founder, Blogger, Dad

Wednesday, April 23, 2008

Feynman and Everett

A couple of years ago I gave a talk at the Institute for Quantum Information at Caltech about the origin of probability -- i.e., the Born rule -- in many worlds ("no collapse") quantum mechanics. It is often claimed that the Born rule is a consequence of many worlds -- that it can be derived from, and is a prediction of, the no collapse assumption. However, this is only true in a particular (questionable) limit of infinite numbers of degrees of freedom -- it is problematic when only a finite number of degrees of freedom are considered.

After the talk I had a long conversation with John Preskill about many worlds, and he pointed out to me that both Feynman and Gell-Mann were strong advocates: they would go so far as to browbeat visitors on the topic. In fact, both claimed to have invented the idea independently of Everett.

Today I noticed a fascinating paper on the arXiv posted by H.D. Zeh, one of the developers of the theory of decoherence:

Feynman's quantum theory
H. D. Zeh
(Submitted on 21 Apr 2008)
A historically important but little known debate regarding the necessity and meaning of macroscopic superpositions, in particular those containing different gravitational fields, is discussed from a modern perspective.

The discussion analyzed by Zeh, concerning whether the gravitational field need be quantized, took place at a relativity meeting at the University of North Carolina in Chapel Hill in 1957. Feynman presents a thought experiment in which a macroscopic mass (source for the gravitational field) is placed in a superposition state. One of the central points is necessarily whether the wavefunction describing the macroscopic system must collapse, and if so exactly when. The discussion sheds some light on Feynman's (early) thoughts on many worlds and his exposure to Everett's ideas, which apparently occurred even before their publication (see below).

Nowadays no one doubts that large and complex systems can be placed in superposition states. This capability is at the heart of quantum computing. Nevertheless, few have thought through the implications for the necessity of the "collapse" of the wavefunction describing, e.g., our universe as a whole. I often hear statements like "decoherence solved the problem of wavefunction collapse". I believe that Zeh would agree with me that decoherence is merely the mechanism by which the different Everett worlds lose contact with each other! (And, clearly, this was already understood by Everett to some degree.) Incidentally, if you read the whole paper you can see how confused people -- including Feynman -- were about the nature of irreversibility, and the difference between effective (statistical) irreversibility and true (quantum) irreversibility.

Zeh: ... Quantum gravity, which was the subject of the discussion, appears here only as a secondary consequence of the assumed absence of a collapse, while the first one is that "interference" (superpositions) must always be maintained. ... Because of Feynman's last sentence it is remarkable that neither John Wheeler nor Bryce DeWitt, who were probably both in the audience, stood up at this point to mention Everett, whose paper was in press at the time of the conference because of their support [14]. Feynman himself must have known it already, as he refers to Everett's "universal wave function" in Session 9 – see below. ...
Toward the end of the conference (in the Closing Session 9), Cecile DeWitt mentioned that there exists another proposal that there is one "universal wave function". This function has already been discussed by Everett, and it might be easier to look for this "universal wave function" than to look for all the propagators. Feynman said that the concept of a "universal wave function" has serious conceptual difficulties. This is so since this function must contain amplitudes for all possible worlds depending on all quantum-mechanical possibilities in the past and thus one is forced to believe in the equal reality [sic!] of an infinity of possible worlds.

Well said! Reality is conceptually difficult, and it seems to go beyond what we are able to observe. But he is not ready to draw this ultimate conclusion from the superposition principle that he always defended during the discussion. Why should a superposition not be maintained when it involves an observer? Why “is” there not an amplitude for me (or you) observing this and an amplitude for me (or you) observing that in a quantum measurement – just as it would be required by the Schrödinger equation for a gravitational field? Quantum amplitudes represent more than just probabilities – recall Feynman’s reply to Bondi’s first remark in the quoted discussion. However, in both cases (a gravitational field or an observer) the two macroscopically different states would be irreversibly correlated to different environmental states (possibly including you or me, respectively), and are thus not able to interfere with one another. They form dynamically separate “worlds” in this entangled quantum state. ...

Feynman then gave a résumé of the conference, adding some "critical comments", from which I here quote only one sentence addressed to mathematical physicists: Feynman: "Don't be so rigorous or you will not succeed." (He explains in detail how he means it.)

It is indeed a big question what mathematically rigorous theories can tell us about reality if the axioms they require are not, or not exactly, empirically founded, and in particular if they do not even contain the most general axiom of quantum theory: the superposition principle. It was the important lesson from decoherence theory that this principle holds even where it does not seem to hold. However, many modern field theorists and cosmologists seem to regard quantization as of secondary or merely technical importance (just providing certain "quantum corrections") for their endeavours, which are essentially performed by using classical terms (such as classical fields). It is then not surprising that the measurement problem never comes up for them. How can anybody do quantum field theory or cosmology at all nowadays without first stating clearly whether he/she is using Everett’s interpretation or some kind of collapse mechanism (or something even more speculative)?

Previous posts on many worlds quantum mechanics.

Dave Bacon said... Actually I think Feynman wasn't happy with many worlds... from the original PhysComp conference: "There are all kinds of questions like this, and what I'm trying to do is to get you people who think about computer-simulation possibilities to pay a great deal of attention to this, to digest as well as possible the real answers of quantum mechanics, and see if you can't invent a different point of view than the physicists have had to invent to describe this. In fact the physicists have no good point of view.
Somebody mumbled something about a many-world picture, and that many-world picture says the wave function psi is what's real, and damn the torpedoes if there are so many variables, N^R. All these different worlds and every arrangement of configurations are all there just like our arrangement of configurations, we just happen to be sitting in this one. It's possible, but I'm not very happy with it."

steve said... Well, I'm not happy about it either, but I don't see any other sensible interpretation! Copenhagen is (to me) ill-defined (when does "collapse" happen, exactly?) and the Bayesian "qm is about what the observer knows (information), really" is a much more limited theory than the usual ones -- try answering questions about quantum gravity and quantum cosmology with that perspective! Here is Gell-Mann claiming that Feynman is a many worlder (decoherent historicist, in Gell-Mann and Hartle's language; from a letter in Physics Today, Feb. 1999):

steve said... Dieter Zeh responds below. I guess trying to figure out someone else's interpretation of quantum mechanics has a significant intrinsic uncertainty!

Dear Professor Hsu, Thank you for your information. I can confirm what you say in your sentence that starts with "I believe that Zeh would agree ..." (in your blog). However, I am a bit surprised about what you say in your second paragraph - especially after Feynman's reaction in the Chapel Hill discussion. I talked to Murray Gell-Mann on several occasions (unfortunately not with Feynman), but I think that he did not interpret Everett quite correctly. He did not particularly like the wave function (he used density matrices - indicating that they or the wave function were just tools for him). So he needed operators and projections for their interpretation (to form histories, which are NOT branching wave functions but discrete events). When he claimed that he and Hartle independently discovered Everett, he simply meant that they do not use a collapse. Occasionally they spoke of their theory as "post-Everett" quantum mechanics. I never quite understood it, but Robert Griffiths once asked me not to quote his papers any more, since they "have nothing to do with our decoherence approach". So I wonder what Gell-Mann may have said when Feynman agreed with him (according to the comment by "Steve"). Best regards, Dieter Zeh

P.S.: Perhaps I should have written this in your blog.

Quercus said... It's all incoherence theory to me. Dr. Hsu, could you just sum up in two or three basic sentences your current understanding of the physical nature of the universe. Thanks. Is life really just a dream?

Carson Chow said... Hi Steve, It seems to me that decoherence and many worlds (modulo your finiteness corrections) seem to explain Born probabilities pretty reasonably. Is it the use of the phrase "many worlds" that throws people off? Can I summarize the idea to: there is a huge "universe wave function" that is unitarily evolving. However, with decoherence and the central limit theorem you get Born QM? Sounds more reasonable than collapse to me. What exactly bothers people? What am I missing?

steve said... Quercus: when it comes to interpretations of qm, no one really knows what the ultimate answers are! I think people are unhappy with the "other branches (worlds)". But, often such people have not thought through the more conventional approaches (e.g. Copenhagen) thoroughly enough to realize they are not just unpalatable, but even logically incomplete. (See the Weinberg excerpt on one of my linked blog pages.)
I believe that the no collapse interpretation is logically complete, and its main problem is the Born rule (the existence of the other branches does not bother me). One has to accept that there are many more branches where physicists have not seen empirical evidence for the Born rule than there are branches like ours where it seems to work. It turns out that the "maverick" branches all have small norm, but there is nothing in the standard formulation which says we should ignore them. Why, then, do we happen to live on a non-maverick branch? Zeh would say we just have to assume this a priori. I might hope that there is some dynamical reason that small norm branches somehow go away...

Some critics of no collapse claim there is a "basis problem", but I believe that decoherence + the assumption of local interactions solves this problem. Everett simply assumed the primacy of unitary (Schrodinger) evolution, and showed that all the other stuff (the *appearance* of collapse, Born probabilities, in a certain limit) followed as consequences. There seems to be some dispute about what Feynman believed, or whether Gell-Mann and Hartle's "decoherent histories" is the same as Griffiths' or Everett's formulation, but it does seem that all would agree that the Copenhagen collapse of the wavefunction is unnecessary, though its removal then implies the existence of other branches.

David said... Just by the by, I studied physics as an undergrad (MS in OR subsequently). It never ceases to amaze me that the questions that physics-type people ruminate on extensively get ignored by thinkers in other fields. Yet they end up being fundamental to the formation of those problems.

Spencer Hargiss said... I'm a bit confused. I think I'm misunderstanding the relationship between entanglement and decoherence. I find myself thinking that entanglement would prevent branching of multiple strongly decohered universes on a macroscopic scale. I understand that according to the MWI, the entire phase space defined by what is possible according to the laws of physics could be said to exist, but what if those laws dictate that the vast majority of possible interactions of each particle in a macroscopic object would result in that particle becoming entangled again with the macroscopic object, and the entire world, due to the constant barrage of photons and other particles that all macroscopic objects have to sustain. Why wouldn't this result in a sort of feedback cycle of quantum inbreeding (in-tangling) preventing any macroscopic splitting of worlds, just as in evolution, though inbreeding could be said to cause speciation, none will occur unless a small (quantum?) subset of the population stops breeding (entangling genes) with the rest. What have I got wrong?
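To put a number on the small-norm claim made above (a standard textbook calculation; the figures are my own illustrative choices): for N measurements on identically prepared qubits α|0⟩ + β|1⟩, the total squared norm of the "maverick" branches, those whose observed frequency of outcome 0 misses |α|² by more than ε, is a binomial tail that vanishes as N grows.

```python
# Total Born weight of "maverick" branches after N qubit measurements:
# branches whose frequency of outcome 0 misses |alpha|^2 = p by more than eps.
from scipy.stats import binom

p, eps = 0.7, 0.05                    # |alpha|^2 and the allowed deviation
for N in (10, 100, 1000, 10000):
    lo, hi = int((p - eps) * N), int((p + eps) * N)
    inside = binom.cdf(hi, N, p) - binom.cdf(lo - 1, N, p)
    print(N, 1.0 - inside)            # squared norm of maverick branches -> 0
```

The contested step, of course, is why squared norm should be the measure of anything, which is just the Born-rule question again; for any finite N the maverick weight is small but not zero.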
Models in biology: 'accurate descriptions of our pathetic thinking'

BMC Biology 2014, 12:29

Received: 3 February 2014. Published: 30 April 2014

Abstract

In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as 'predictive', in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders.

Keywords: Mathematical model; Predictive model; Fundamental physical laws; Phenomenology; Membrane-bounded compartment; T-cell receptor; Somitogenesis clock

The revenge of Erwin Chargaff

When I first came to biology from mathematics, I got used to being told that there was no place for mathematics in biology. Being a biological novice, I took these strictures at face value. In retrospect, they proved helpful because the skepticism encouraged me to let go of my mathematical past and to immerse myself in experiments. It was only later, through having to stand up in front of a class of eager students and say something profound (I co-teach Harvard's introductory graduate course in Systems Biology), that I realized how grievously I had been misled. Biology has some of the finest examples of how quantitative modeling and measurement have been used to unravel the world around us [1, 2]. The idea that such methods would not be used would have seemed bizarre to the biochemist Otto Warburg, the geneticist Thomas Hunt Morgan, the evolutionary biologist R. A. Fisher, the structural biologist Max Perutz, the stem-cell biologists Ernest McCulloch and James Till, the developmental biologist Conrad Waddington, the physiologist Arthur Guyton, the neuroscientists Alan Hodgkin and Andrew Huxley, the immunologist Niels Jerne, the pharmacologist James Black, the epidemiologist Ronald Ross, the ecologist Robert MacArthur and to others more or less well known. Why is it that biologists have such an odd perception of their own discipline? I attribute this to two factors. The first is an important theme in systems biology [3, 4]: the mean may not be representative of the distribution. Otto Warburg is a good example. In the eyes of his contemporaries, Warburg was an accomplished theorist: 'to develop the mathematical analysis of the measurements required very exceptional experimental and theoretical skill' [5]. Once Warburg had opened the door, however, it became easy for those who followed him to avoid acquiring the same skills.
Of Warburg's three assistants who won Nobel Prizes, one would not describe Hans Krebs or Hugo Theorell as 'theoretically skilled', although Otto Meyerhof was certainly quantitative. On average, theoretical skills recede into the long tail of the distribution, out of sight of the conventional histories and textbooks. It is high time for a revisionist account of the history of biology to restore quantitative reasoning to its rightful place.

The second factor is the enormous success of molecular biology. This is ironic, for many of the instigators of that revolution were physicists: Erwin Schrödinger, Max Delbrück, Francis Crick, Leo Szilard, Seymour Benzer and Wally Gilbert. There was, in fact, a brief window, during the life of physicist George Gamow's RNA Tie Club, when it was claimed, with poor judgment, that physics and information theory could work out the genetic code [6, 7]. Erwin Chargaff, who first uncovered the complementarity of the A-T and G-C nucleotide pairs (Chargaff's rules), was nominally a member of the club—his code name was lysine—but I doubt that he was taken in by such theoretical pretensions. He famously described the molecular biology of the time as 'the practice of biochemistry without a license' [8]. When Marshall Nirenberg and Heinrich Matthaei came out of nowhere to make the first crack in the genetic code [9], thereby showing that licensing was mandatory—one can just sense the smile on Chargaff's face—the theorists of the day must have felt that the barbarians were at the gates of Rome. Molecular biology never recovered from this historic defeat of theory and there have been so many interesting genes to characterize since, it has never really needed to. It is the culmination of molecular biology in the genome projects that has finally brought diminishing returns to the one gene, ten PhDs way of life. We now think we know most of the genes and the interesting question is no longer characterizing this or that gene but, rather, understanding how the various molecular components collectively give rise to phenotype and physiology. We call this systems biology. It is a very different enterprise. It has brought into biology an intrusion of aliens and concepts from physics, mathematics, engineering and computer science and a renewed interest in the role of quantitative reasoning and modeling, to which we now turn.

Forward and reverse modeling

We can distinguish two kinds of modeling strategy in the current literature. We can call them forward and reverse modeling. Reverse modeling starts from experimental data and seeks potential causalities suggested by the correlations in the data, captured in the structure of a mathematical model. Forward modeling starts from known, or suspected, causalities, expressed in the form of a model, from which predictions are made about what to expect. Reverse modeling has been widely used to analyze the post-genome, -omic data glut and is sometimes mistakenly equated with systems biology [10]. It has occasionally suggested new conceptual ideas but has more often been used to suggest new molecular components or interactions, which have then been confirmed by conventional molecular biological approaches. The models themselves have been of less significance for understanding system behavior than as a mathematical context in which statistical inference becomes feasible. In contrast, most of our understanding of system behavior, as in concepts such as homeostasis, feedback, canalization and noise, has emerged from forward modeling.
I will focus below on the kinds of models used in forward modeling. This is not to imply that reverse modeling is unimportant or uninteresting. There are many situations, especially when dealing with physiological or clinical data, where the underlying causalities are unknown or hideously complicated and a reverse-modeling strategy makes good sense. But the issues in distilling causality from correlation deserve their own treatment, which lies outside the scope of the present essay [11].

The logical structure of models

Mathematical models come in a variety of flavors, depending on whether the state of a system is measured in discrete units ('off' and 'on'), in continuous concentrations or as probability distributions and whether time and space are themselves treated discretely or continuously. The resulting menagerie of ordinary differential equations, partial differential equations, delay differential equations, stochastic processes, finite-state automata, cellular automata, Petri nets, hybrid models,... each have their specific technical foibles and a vast associated technical literature. It is easy to get drowned by these technicalities, while losing sight of the bigger picture of what the model is telling us. Underneath all that technical variety, each model has the same logical structure.

Any mathematical model, no matter how complicated, consists of a set of assumptions, from which are deduced a set of conclusions. The technical machinery specific to each flavor of model is concerned with deducing the latter from the former. This deduction comes with a guarantee, which, unlike other guarantees, can never be invalidated. Provided the model is correct, if you accept its assumptions, you must as a matter of logic also accept its conclusions. If 'Socrates is a man' and 'All men are mortal' then you cannot deny that 'Socrates is mortal'. The deductive process that leads from assumptions to conclusions involves much the same Aristotelian syllogisms disguised in the particular technical language appropriate to the particular flavor of model being used or, more often, yet further disguised in computer-speak. This guarantee of logical rigor is a mathematical model's unique benefit.

Note, however, the fine print: 'provided the model is correct'. If the deductive reasoning is faulty, one can draw any conclusion from any assumption. There is no guarantee that a model is correct (only a guarantee that if it is correct then the conclusions logically follow from the assumptions). We have to hope that the model's makers have done it right and that the editors and the reviewers have done their jobs. The best way to check this is to redo the calculations by a different method. This is rarely easy but it is what mathematicians do within mathematics itself. Reproducibility improves credibility. We may not have a guarantee that a model is correct but we can become more (or less) confident that it is. The practice of mathematics is not so very different from the experimental world after all.

The correctness of a model is an important issue that is poorly addressed by the current review process. However, it can be addressed as just described. From now on, I will assume the correctness of any model being discussed and will take its guarantee of logical validity at face value. The guarantee tells us that the conclusions are already wrapped up in the assumptions, of which they are a logical consequence. This is not to say that the conclusions are obvious.
This may be far from the case and the deductive process can be extremely challenging. However, that is a matter of mathematical technique. It should not distract from what is important for the biology, which is the set of assumptions, or the price being paid for the conclusions being drawn. Instead of asking whether we believe a model's conclusions, we should be asking whether we believe the model's assumptions. What basis do we have for doing so?

On making assumptions

Biology rests on physics. At the length scales and timescales relevant to biology, physicists have worked out the fundamental laws governing the behavior of matter. If our assumptions can be grounded in physics, then it seems that our models should be predictive, in the sense that they are not subject to falsification—that issue has already been taken care of with the fundamental laws—so that we can be confident of the conclusions drawn. Physicists would make an even stronger claim on the basis that, at the fundamental level, there is nothing other than physics. As Richard Feynman put it, 'all things are made of atoms and... everything that living things do can be understood in terms of the jigglings and wigglings of atoms' [12, Chapter 3-3]. This suggests that provided we have included all the relevant assumptions in our models then whatever is to be known should emerge from our calculations. Models based on fundamental physical laws appear in this way to be objective descriptions of reality, which we can interrogate to understand reality. This vision of the world and our place in it has been powerful and compelling.

Can we ground biological models on fundamental physical laws? The Schrödinger equation even for a single protein is too hideously complicated to solve directly. There is, however, one context in which it can be approximated. Not surprisingly, this is at the atomic scale of which Feynman spoke, where molecular dynamics models can capture the jigglings and wigglings of the atoms of a protein in solution or in a lipid membrane in terms of physical forces [13]. With improved computing resources, including purpose-built supercomputers, such molecular dynamics models have provided novel insights into the functioning of proteins and multi-protein complexes [14, 15]. The award of the 2013 Nobel Prize in Chemistry to Martin Karplus, Michael Levitt and Arieh Warshel recognizes the broad impact of these advances.

As we move up the biological scale, from atoms to molecules, we enter a different realm, of chemistry, or biochemistry, rather than physics. But chemistry is grounded in physics, is it not? Well, so they say but let us see what actually happens when we encounter a chemical reaction $A + B \rightarrow C$ and want to study it quantitatively. To determine the rate of such a reaction, the universal practice in biology is to appeal to the law of mass action, which says that the rate is proportional to the product of the concentrations of the reactants, from which we deduce that

$$\frac{d[C]}{dt} = k\,[A][B],$$

where $[\cdot]$ denotes concentration and $k$ is the constant of proportionality. Notice the immense convenience that mass action offers, for we can jump from reaction to mathematics without stopping to think about the chemistry. There is only one problem. This law of mass action is not chemistry.
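To see the convenience concretely before examining the problem, here is a minimal sketch (with an illustrative rate constant and initial concentrations of my own choosing) of how mass action turns the reaction scheme directly into a computable model:

```python
# Mass-action kinetics for A + B -> C: d[C]/dt = k[A][B].
from scipy.integrate import solve_ivp

k = 2.0                                        # illustrative rate constant

def rhs(t, y):
    A, B, C = y
    rate = k * A * B                           # the mass-action assumption
    return [-rate, -rate, rate]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5, 0.0])
print(sol.y[2, -1])                            # [C] -> min([A]0, [B]0) = 0.5
```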
A chemist might point out, for instance, that the reaction of hydrogen and bromine in the gas phase to form hydrobromic acid, $\mathrm{H_2} + \mathrm{Br_2} \rightarrow 2\,\mathrm{HBr}$, has a rate of reaction given by

$$\frac{d[\mathrm{HBr}]}{dt} = \frac{k_1\,[\mathrm{H_2}]\,[\mathrm{Br_2}]^{3/2}}{[\mathrm{Br_2}] + k_2\,[\mathrm{HBr}]},$$

which is rather far from what mass action claims, and that, in general, you cannot deduce the rate of a reaction from its stoichiometry [16]. (For more about the tangled tale of mass action, see [17], from which this example is thieved.) Mass action is not physics or even chemistry, it is phenomenology: a mathematical formulation, which may account for observed behavior but which is not based on fundamental laws.

Actually, mass action is rather good phenomenology. It has worked well to account for how enzymes behave, starting with Michaelis and Menten and carrying on right through to the modern era [18]. It is certainly more principled than what is typically done when trying to convert biological understanding into mathematical assumptions. If A is known to activate B—perhaps A is a transcription factor and B a protein that is induced by A—then it is not unusual to find activation summarized in some Hill function of the form

$$\frac{d[B]}{dt} = \frac{M\,[A]^h}{K^h + [A]^h},$$

for which, as Hill himself well understood and has been repeatedly pointed out [19], there is almost no realistic biochemical justification. It is, at best, a guess (a short sketch of this function's behavior follows below).

The point here is not that we should not guess; we often have no choice but to do so. The point is to acknowledge the consequences of phenomenology and guessing for the kinds of models we make. They are no longer objective descriptions of reality. They can no longer be considered predictive, in the sense of physics or even of molecular dynamics. What then are they? One person who understood the answer was the pharmacologist James Black [20]. Pharmacology has been a quantitative discipline almost since its inception and mathematical models have formed the basis for much of our understanding of how drugs interact with receptors [21]. (Indeed, models were the basis for understanding that there might be such entities as receptors in the first place [2]). Black used mathematical models on the road that led to the first beta-adrenergic receptor antagonists, or beta blockers, and in his lecture for the 1988 Nobel Prize in Physiology or Medicine he crystallized his understanding of them in a way that nobody has ever bettered: 'Models in analytical pharmacology are not meant to be descriptions, pathetic descriptions, of nature; they are designed to be accurate descriptions of our pathetic thinking about nature' [22]. Just substitute 'systems biology' for 'analytical pharmacology' and you have it. Black went on to say about models that: 'They are meant to expose assumptions, define expectations and help us to devise new tests'.

An important difference arises between models like this, which are based on phenomenology and guesswork, and models based on fundamental physics. If the model is not going to be predictive and if we are not certain of its assumptions, then there is no justification for the model other than as a test of its (pathetic) assumptions. The model must be falsifiable. To achieve this, it is tempting to focus on the model, piling the assumptions up higher and deeper in the hope that they might eventually yield an unexpected conclusion. More often than not, the conclusions reached in this way are banal and unsurprising.
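Returning to the Hill function mentioned above (a minimal sketch with my own parameter values): its appeal is that a single exponent h tunes the response from gentle to switch-like, with half-maximal output at [A] = K regardless of h, which is precisely what makes it so easy to reach for without biochemical justification.

```python
# The Hill function M[A]^h / (K^h + [A]^h): a one-parameter family of sigmoids.
import numpy as np

def hill(A, M=1.0, K=1.0, h=1):
    return M * A**h / (K**h + A**h)

A = np.linspace(0.0, 4.0, 9)
for h in (1, 2, 8):                  # larger h: sharper, more switch-like response
    print(h, np.round(hill(A, h=h), 3))
# Output is half-maximal at [A] = K for every h; h only tunes the steepness.
```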
It is better to focus on the biology by asking a specific question, so that at least one knows whether or not the assumptions are sufficient for an answer. Indeed, it is better to have a question in mind first because that can guide both the choice of assumptions and the flavor of the model that is used. Sensing which assumptions might be critical and which irrelevant to the question at hand is the art of modeling and, for this, there is no substitute for a deep understanding of the biology. Good model building is a subjective exercise, dependent on local information and expertise, and contingent upon current knowledge. As to what biological insights all this might bring, that is best revealed by example.

Three models

The examples that follow extend from cell biology to immunology to developmental biology. They are personal favorites and illuminate different issues.

Learning how to think about non-identical compartments

The eukaryotic cell has an internal structure of membrane-bounded compartments—nucleus, endoplasmic reticulum, Golgi and endosomes—which dynamically interact through vesicle trafficking. Vesicles bud from and fuse to compartments, thereby exchanging lipids and proteins. The elucidation of trafficking mechanisms was celebrated in the 2013 Nobel Prize in Physiology or Medicine awarded to Jim Rothman, Randy Schekman and Thomas Südhof. A puzzling question that remains unanswered is how distinct compartments remain distinct, with varied lipid and protein profiles, despite continuously exchanging material. How are non-identical compartments created and maintained?

Reinhart Heinrich and Tom Rapoport address this question through a mathematical model [23], which formalizes the sketch in Figure 1. Coat proteins A and B, corresponding to Coat Protein I (COPI) and COPII, encourage vesicle budding from compartments 1 and 2. Soluble N-ethyl-maleimide-sensitive factor attachment protein receptors (SNAREs) X, U, Y and V are present in the compartment membranes and mediate vesicle fusion by pairing X with U and Y with V, corresponding to v- and t-SNAREs. A critical assumption is that SNAREs are packaged into vesicles to an extent that depends on their affinities for coats, for which there is some experimental evidence. If the cognate SNAREs X and U bind better to coat A than to coat B, while SNAREs Y and V bind better to coat B than to coat A, then the model exhibits a threshold in the relative affinities at which non-identical compartments naturally emerge. Above this threshold, even if the model is started with identical distributions of SNAREs in the two compartments, it evolves over time to a steady state in which the SNARE distributions are different. This is illustrated in Figure 1, with a preponderance of SNAREs X and U in compartment 1 and a preponderance of SNAREs Y and V in compartment 2.

Figure 1: Creation of non-identical compartments. Schematic of the Heinrich–Rapoport model, from [23, Figure one], with the distribution of SNAREs corresponding approximately to the steady state with non-identical compartments. © 2005 Heinrich and Rapoport. Originally published in Journal of Cell Biology, 168:271-280, doi:10.1083/jcb.200409087. SNARE, soluble N-ethyl-maleimide-sensitive factor attachment protein receptor.

The actual details of coats and SNAREs are a good deal more complicated than in this model.
It is a parsimonious model, containing just enough biological detail to reveal the phenomenon, thereby allowing its essence—the differential affinity of SNAREs for coats—to be clearly understood. We see that a model can be useful not just to account for data—there is no data here—but to help us think. However, the biological details are only part of the story; the mathematical details must also be addressed. Even a parsimonious model typically has several free parameters, such as, in this case, binding affinities or total amounts of SNAREs or coats. To sidestep the parameter problem, discussed further in the next example, parameters of a similar type are set equal to each other. Here, judgment plays a role in assessing that differences in these parameters might play a secondary role. The merit of this assumption could have been tested by sensitivity analysis [24], which can offer reassurance that the model behavior is not some lucky accident of the particular values chosen for the parameters.

The model immediately suggests experiments that could falsify it, of which the most compelling would be in vitro reconstitution of compartments with a minimal set of coats and SNAREs. I was curious about whether this had been attempted and asked Tom Rapoport about it. Tom is a cell biologist [25] whereas the late Reinhart Heinrich was a physicist [26]. Their long-standing collaboration (they were pioneers in the development of metabolic control analysis in the 1970s) was stimulated by Tom's father, Samuel Rapoport, himself a biochemist with mathematical convictions [27]. Tom explained that the model had arisen from his sense that there might be a simple explanation for distinct compartments, despite the complexity of trafficking mechanisms, but that his own laboratory was not in a position to undertake the follow-up experiments. Although he had discussed the ideas with others who were better placed to do so, the field still seemed to be focused on the molecular details.

The model makes us think further, as all good models should. The morphology of a multicellular organism is a hereditary feature that is encoded in DNA, in genetic regulatory programs that operate during development. But what encodes the morphology of the eukaryotic cell itself? This is also inherited: internal membranes are dissolved or fragmented during cell division, only to reform in their characteristic patterns in the daughter cells after cytokinesis. Trafficking proteins are genetically encoded but how is the information to reform compartments passed from mother to daughter? The Heinrich–Rapoport model suggests that this characteristic morphology may emerge dynamically, merely as a result of the right proteins being present along with the right lipids. This would be a form of epigenetic inheritance [28], in contrast to the usual genetic encoding in DNA. Of course, DNA never functions on its own, only in concert with a cell. The Heinrich–Rapoport model reminds us that the cell is the basic unit of life. Somebody really ought to test the model.

Discrimination by the T-cell receptor and the parameter problem

Cytotoxic T cells of the adaptive immune system discriminate between self and non-self through the interaction between the T-cell receptor (TCR) and major histocompatibility complex (MHC) proteins on the surface of a target cell. MHCs present short peptide antigens (eight amino acids), derived from proteins in the target cell, on their external surface.
The discrimination mechanism must be highly sensitive, to detect a small number of strong agonist, non-self peptide-MHCs (pMHCs) against a much larger background of weak agonist, self pMHCs on the same target cell. It must also be highly specific, since the difference between strong- and weak-agonist pMHCs may rest on only a single amino acid. Discrimination also appears to be very fast, with downstream signaling proteins being activated within 15 seconds of TCR interaction with a strong agonist pMHC. A molecular device that discriminates with such speed, sensitivity and specificity would be a challenge to modern engineering. It is an impressive demonstration of evolutionary tinkering, which Grégoire Altan-Bonnet and Ron Germain sought to explain by combining mathematical modeling with experiments [29].

The lifetime of pMHC-TCR binding had been found to be one of the few biophysical quantities to correlate with T-cell activation. Specificity through binding had previously been analyzed by John Hopfield in a classic study [30]. He showed that a system at thermodynamic equilibrium could not achieve discrimination beyond a certain minimum level but that with sufficient dissipation of energy, arbitrarily high levels of discrimination were possible. He suggested a 'kinetic proofreading' scheme to accomplish this, which Tim McKeithan subsequently extended to explain TCR specificity [31]. pMHC binding to the TCR activates lymphocyte-specific protein tyrosine kinase (LCK), which undertakes multiple phosphorylations of TCR accessory proteins and these phosphorylations are presumed to be the dissipative steps. However, the difficulty with a purely kinetic proofreading scheme is that specificity is purchased at the expense of both sensitivity and speed [32]. Previous work from the Germain laboratory had implicated SH2 domain-containing tyrosine phosphatase-1 (SHP-1) in downregulating LCK for weak agonists and the mitogen-activated protein kinase (MAPK), extracellular signal-regulated kinase (ERK), in inhibiting SHP-1 for strong agonists [33]. This led Altan-Bonnet and Germain to put forward the scheme in Figure 2, in which a core kinetic proofreading scheme stimulates negative feedback through SHP-1 together with a slower positive feedback through ERK. The behavior of interlinked feedback loops has been a recurring theme in the literature [34, 35].

Figure 2: Discrimination by the T-cell receptor. Schematic of the Altan-Bonnet–Germain model from [29, Figure two A], showing a kinetic proofreading scheme through a sequence of tyrosine phosphorylations, which is triggered by the binding of the TCR to pMHC, linked with a negative feedback loop through the tyrosine phosphatase SHP-1 and a positive feedback loop through MAPK. MAPK, mitogen-activated protein kinase; pMHC, peptide-major histocompatibility complex; P, singly phosphorylated; PP, multiply phosphorylated; SHP-1, SH2 domain-containing tyrosine phosphatase-1; TCR, T-cell receptor.

A parsimonious model of such a system might have been formulated with abstract negative and positive feedback differentially influencing a simple kinetic proofreading scheme. In fact, exactly this was done some years later [36]. The advantage of such parsimony is that it is easier to analyze how the interaction between negative and positive feedback regulates model behavior. The biological wood starts to emerge from the molecular trees, much as it did for Heinrich and Rapoport in the previous example.
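The arithmetic behind kinetic proofreading is worth seeing once. In the simplest caricature (mine, not the Altan-Bonnet–Germain model), a bound pMHC-TCR complex must survive N sequential modification steps of rate $k_p$ before signaling, while dissociating at rate $k_{\text{off}} = 1/\tau$; the completion probability $(k_p/(k_p + k_{\text{off}}))^N$ then amplifies lifetime differences exponentially in N:

```python
# Kinetic proofreading: probability that a bound complex completes N
# modification steps (rate kp each) before dissociating (rate koff = 1/tau).
kp, N = 1.0, 6                        # illustrative step rate and chain length
for tau in (1.0, 3.0):                # weak vs strong agonist lifetimes (seconds)
    koff = 1.0 / tau
    print(tau, (kp / (kp + koff)) ** N)
# (0.75/0.5)^6 ~ 11.4: a 3-fold lifetime difference becomes an ~11-fold
# difference in output, and the amplification grows with N.
```

At equilibrium (no proofreading steps) the same 3-fold lifetime difference would change occupancy by at most 3-fold; each dissipative step multiplies the discrimination, which is Hopfield's point.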
But the goal here also involves the interpretation of quantitative experimental data. Altan-Bonnet and Germain opted instead for a detailed model based on the known biochemistry. Their model has around 300 dynamical variables. Only the core module is described in the main paper, with the remaining nine modules consigned to the Supplementary Graveyard. Herbert Sauro’s JDesigner software, part of the Systems Biology Workbench [37], is required to view the model in its entirety. The tension between parsimony and detail runs through systems biology like a fault line. To some, and particularly to experimentalists, detail is verisimilitude. The more a model looks like reality, the more it might tell us about reality. The devil is in the details. But we never bother ourselves with all the details. All those phosphorylation sites? Really? All 12 subunits of RNA Pol II? Really? We are always simplifying—ignoring what we think is irrelevant—or abstracting—replacing something complicated by some higher-level entity that is easier to grasp. This is as true for the experimentalist’s informal model—the cartoon that is sketched on the whiteboard—as it is for the mathematician’s formal model. It is impossible to think about molecular systems without such strategies: it is just that experimentalists and mathematicians do it differently and with different motivations. There is much to learn on both sides, for mathematicians about the hidden assumptions that guide experimental thinking, often so deeply buried as to require psychoanalysis to elicit, and for experimentalists about the power of abstraction and its ability to offer a new language in which to think. We are in the infancy of learning how to learn from each other. The principal disadvantage of a biologically detailed model is the attendant parameter problem. Parameter values are usually estimated by fitting the model to experimental data. Fitting only constrains some parameters; a good rule of thumb is that 20% of the parameters are well constrained by fitting, while 80% are not [38]. As John von Neumann said, expressing a mathematician’s disdain for such sloppiness, ‘With four parameters I can fit an elephant and with five I can make him wiggle his trunk’ [39]. What von Neumann meant is that a model with too many parameters is hard to falsify. It can fit almost any data and what explanatory power it might have may only be an accident of the particular parameter values that emerge from the fitting procedure. Judging from some of the literature, we seem to forget that a model does not predict the data to which it is fitted: the model is chosen to fit them. In disciplines where fitting is a professional necessity, such as X-ray crystallography, it is standard practice to fit to a training data set and to falsify the model, once it is fitted, on whether or not it predicts what is important [40]. In other words, do not fit what you want to explain! Remarkably, Altan-Bonnet and Germain sidestepped these problems by not fitting their model at all. They adopted the same tactic as Heinrich and Rapoport and set many similar parameters to the same value, leaving a relatively small number of free parameters. Biological detail was balanced by parametric parsimony. The free parameters were then heroically estimated in independent experiments. I am told that every model parameter was constrained, although this is not at all clear from the paper. 
What was also not mentioned, as Ron Germain reported, is that ‘the model never worked until we actually measured ERK activation at the single cell level and discovered its digital nature’. We see that the published model emerged through a cycle of falsification, although here it is the model that falsifies the interpretation of population-averaged data, reminding us yet again that the mean may not be representative of the distribution. With the measured parameter values, the model exhibits a sharp threshold at a pMHC-TCR lifetime of about 3 seconds, above which a few pMHCs (10 to 100) are sufficient to trigger full downstream activation of ERK in 3 minutes. Lifetimes below the threshold exhibit a hierarchy of responses, with those close to the threshold triggering activation only with much larger amounts of pMHCs (100,000), while those further below the threshold are squelched by the negative feedback without ERK activation.

This accounts well for the specificity, sensitivity and speed of T-cell discrimination, but the authors went further. They interrogated the model to make predictions about issues such as antagonism and tunability, and they confirmed these with new experiments [29]. The model was repeatedly forced to put its falsifiability on the line. In doing so, the boundary of its explanatory power was reached: it could not account for the delay in ERK activation with very weak ligands, and the authors explicitly pointed this out. This should be the accepted practice; it is the equivalent of a negative control in an experiment. A model that explains everything explains nothing. Even von Neumann might have approved.

To be so successful, a detailed model relies on a powerful experimental platform. The OT-1 T cells were obtained from a transgenic mouse line that only expresses a TCR that is sensitive to the strong-agonist peptide SIINFEKL (amino acids 257 to 264 of chicken ovalbumin). The RMA-S target cells were derived from a lymphoma that was mutagenized to be deficient in antigen processing, so that the cells present only exogenously supplied peptides on MHCs. T-cell activation was measured by flow cytometry with a phospho-specific antibody to activated ERK. In this way, calibrated amounts of chosen peptides can be presented on MHCs to a single type of TCR, much of the molecular and cellular heterogeneity can be controlled and quantitative data obtained at the single-cell level. Such exceptional experimental capabilities are not always available in other biological contexts.

From micro to macro: the somitogenesis clock

Animals exhibit repetitive anatomical structures, like the spinal column and its attendant array of ribs and muscles in vertebrates and the multiple body segments carrying wings, halteres and legs in arthropods like Drosophila. During vertebrate development, repetitive structures form sequentially over time. In the mid-1970s, the developmental biologist Jonathan Cooke and the mathematician Chris Zeeman suggested that the successive formation of somites (bilateral blocks of mesodermal tissue on either side of the neural tube—see Figure 3) might be driven by a cell-autonomous clock, which progressively initiates somite formation in an anterior-to-posterior sequence as if in a wavefront [41]. They were led to this clock-and-wavefront model in an attempt to explain the remarkable consistency of somite number within a species, despite substantial variation in embryo sizes at the onset of somitogenesis [42].
In the absence of molecular details, which were beyond reach at the time, their idea fell on stony ground. It disappeared from the literature until Olivier Pourquié’s group found the clock in the chicken. His laboratory showed, using fluorescent in situ hybridization to mRNA in tissue, that the gene c-hairy1 exhibits oscillatory mRNA expression with a period of 90 minutes, exactly the time required to form one somite [43]. The somitogenesis clock was found to be conserved across vertebrates, with basic helix-loop-helix transcription factors of the Hairy/Enhancer of Split (HES) family, acting downstream of Notch signaling, exhibiting oscillations in expression with periods ranging from 30 minutes in zebrafish (at 28°C) to 120 minutes in mouse [44]. Such oscillatory genes in somite formation were termed cyclic genes.

Figure 3 The somitogenesis clock. Top: A zebrafish embryo at the ten-somite stage, stained by in situ hybridization for mRNA of the Notch ligand DeltaC, taken from [47, Figure 1]. Bottom left: Potential auto-regulatory mechanisms in the zebrafish, taken from [47, Figure 3A,B]. In the upper mechanism, the Her1 protein dimerizes before repressing its own transcription. In the lower mechanism, Her1 and Her7 form a heterodimer, which represses transcription of both genes; the genes lie close to each other but are transcribed in opposite directions. Explicit transcription and translation delays are shown, which are incorporated in the corresponding models. Bottom right: Mouse embryos stained by in situ hybridization for Uncx4.1 mRNA, a homeobox gene that marks somites, taken from [52, Figure 4].

As to the mechanism of the oscillation, negative feedback of a protein on its own gene was known to be a feature of other oscillators [45], and some cyclic genes, like hes7 in the mouse, were found to exhibit this property. Negative feedback is usually associated with homeostasis—with restoring a system after perturbation—but, as engineers know all too well, it can bring with it the seeds of instability and oscillation [46]. However, Palmeirim et al. had blocked protein synthesis in chick embryos with cycloheximide and found that c-hairy1 mRNA continued to oscillate, suggesting that c-hairy1 was not itself part of a negative-feedback oscillator but was, perhaps, driven by some other oscillatory mechanism. It remained unclear how the clock worked.

The developmental biologist Julian Lewis tried to resolve this question in the zebrafish with the help of a mathematical model [47]. Zebrafish have a very short somite-formation period of 30 minutes, suggesting that evolutionary tinkering may have led to a less elaborate oscillator than in other animals. The HES family genes her1 and her7 were known to exhibit oscillations, and there was some evidence for negative auto-regulation. Lewis opted for the most parsimonious of models to formalize negative auto-regulation of her1 and her7, as informally depicted in Figure 3. However, he made one critical addition by explicitly incorporating the time delays in transcription and translation. Time delay in a negative feedback loop is one feature that promotes oscillation, the other being the strength of the negative feedback. Indeed, there seems to be a trade-off between these features: the more delay, the less strong the feedback has to be for oscillation to occur [48].
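The model itself is compact enough to state in full: delayed negative auto-regulation of a single gene, with mRNA and protein as the only variables. Below is a minimal sketch; the parameter values are of the order Lewis estimated for zebrafish her1, but both they and the simple Euler integration are my illustrative choices, not a reproduction of his code:

```python
import numpy as np

# Delayed auto-repression (after Lewis, Curr Biol 2003):
#   dm/dt = k / (1 + (p(t - Tm)/p0)^2) - c*m     (mRNA)
#   dp/dt = a * m(t - Tp) - b*p                  (protein)
a, b, c = 4.5, 0.23, 0.23   # protein synthesis, protein and mRNA decay (/min)
k, p0 = 33.0, 40.0          # max transcription rate, repression threshold
Tm, Tp = 12.0, 2.8          # transcription and translation delays (min)

dt, T = 0.01, 480.0
steps = int(T / dt)
lag_m, lag_p = int(Tm / dt), int(Tp / dt)
m, p = np.zeros(steps), np.zeros(steps)

for i in range(1, steps):
    p_del = p[i - 1 - lag_m] if i > lag_m else 0.0   # p(t - Tm)
    m_del = m[i - 1 - lag_p] if i > lag_p else 0.0   # m(t - Tp)
    m[i] = m[i - 1] + dt * (k / (1 + (p_del / p0) ** 2) - c * m[i - 1])
    p[i] = p[i - 1] + dt * (a * m_del - b * p[i - 1])

# Intervals between successive protein peaks give the clock period
peaks = [i * dt for i in range(1, steps - 1) if p[i - 1] < p[i] > p[i + 1]]
print([round(t2 - t1, 1) for t1, t2 in zip(peaks, peaks[1:])])
```

Run as is, the intervals between protein peaks settle near 30 minutes, and lengthening the delays lengthens the period; lowering the synthesis rate a mimics the robustness to protein synthesis discussed below.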
Lewis acknowledged the mathematical biologist Nick Monk for alerting him to the importance of delays, and Lewis’s article in Current Biology appeared beside one from Monk exploring time delays in a variety of molecular oscillators [49]. The idea must have been in the air, because Jensen et al. independently made the same suggestion in a letter [50]. The model parameters, including the time delays, were all estimated on the basis of reasonable choices for her1 and her7, taking into account, for instance, the intronic structure of the genes to estimate transcriptional time delays. Nothing was fitted. With the estimated values, the models showed sustained periodic oscillations. A pure Her7 oscillator with homodimerization of Her7 prior to DNA binding (which determines the strength of the repression) had a period of 30 minutes. As with the Heinrich–Rapoport model, there is no data but much biology. What is achieved is the demonstration that a simple auto-regulatory loop can plausibly yield sustained oscillations of the right period.

A significant finding was that the oscillations were remarkably robust to the rate of protein synthesis, which could be lowered by 90% without stopping the oscillations or, indeed, changing the period very much. This suggests a different interpretation of Palmeirim et al.’s cycloheximide block in the chick. As Lewis pointed out, ‘in studying these biological feedback phenomena, intuition without the support of a little mathematics can be a treacherous guide’, a theme to which he returned in a later review [51].

A particularly startling test of the delay model was carried out in the mouse by Ryoichiro Kageyama’s laboratory in collaboration with Lewis [52]. The period for somite formation in the mouse is 120 minutes, and evidence had implicated the mouse hes7 gene as part of the clock mechanism. Assuming a Hes7 half-life of 20 minutes (against a measured half-life of 22.3 minutes), Lewis’s delay model yielded sustained oscillations with a period just over 120 minutes. The model also showed that if Hes7 was stabilized slightly to have a half-life only 10 minutes longer, then the clock broke: the oscillations were no longer sustained but damped out after the first three or four peaks of expression [52, Figure 6B]. Hirata et al. had the clever idea of mutating each of the seven lysine residues in Hes7 to arginine, on the basis that the ubiquitin-proteasomal degradation system would use one or more of these lysines for ubiquitination. The K14R mutant was found to repress hes7 transcription to the same extent as the wild type but to have an increased half-life of 30 minutes. A knock-in mouse expressing Hes7 K14R/K14R showed, exactly as predicted, the first three to four somites clearly delineated, followed by a disorganized blur (Figure 3).

Further work from the Kageyama laboratory, as well as by others, has explored the role of introns in determining the transcriptional delays in the somitogenesis clock, leading to experiments in transgenic mice that again beautifully confirm the predictions of the Lewis model [53–55]. These results strongly suggest the critical role of delays in the clock, but it remains of interest to know the developmental consequences of a working clock with a different period to the wild type [56]. On the face of it, Julian Lewis’s simple model has been a predictive triumph. I cannot think of any other model that can so accurately predict what happens in re-engineered mice.
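The half-lives enter such a model only through the protein decay rate, b = ln 2 / t½, so the startling part is how small the corresponding change is. A quick conversion (the script is mine; the half-lives are those quoted above):

```python
from math import log

# Assumed, measured and K14R-stabilized Hes7 half-lives, in minutes
for t_half in (20.0, 22.3, 30.0):
    print(f"half-life {t_half:4.1f} min  ->  decay rate b = {log(2) / t_half:.4f} per min")
```

A drop in b of roughly a third was enough, in the simulations, to carry the loop across the boundary from sustained to damped oscillations.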
On closer examination, however, there is something distinctly spooky about it. If mouse pre-somitic mesodermal cells are dissociated in culture, individual cells show repetitive peaks of expression of cyclic genes but with great variability in amplitude and period [57]. In isolation, the clock is noisy and unsynchronized, nothing like the beautiful regularity that is observed in the intact tissue. The simple Lewis model can be made much more detailed to allow for such things as stochasticity in gene expression, additional feedback and cell-to-cell communication by signaling pathways, which can serve to synchronize and entrain individual oscillators [47, 58–60]. A more abstract approach can also be taken, in which emergent regularity is seen to arise when noisy oscillators interact through time-delayed couplings [61, 62]. As Andy Oates said to me, such an abstraction ‘becomes simpler (or at least more satisfying) than an increasingly large genetic regulatory network, which starts to grow trunks at alarming angles’. These kinds of ‘tiered models’ have yielded much insight into the complex mechanisms at work in the tissue [63].

The thing is, none of this molecular complexity is present in the Lewis model. Yet, it describes what happens in the mouse with remarkable accuracy. The microscopic complexity seems to have conspired to produce something beautifully simple at the macroscopic level. In physics, the macroscopic gas law, PV = RT, is beautifully simple, and statistical mechanics shows how it emerges from the chaos of molecular interactions [64]. How does the Lewis model emerge in the tissue from the molecular complexity within? It is as if we are seeing a tantalizing glimpse of some future science whose concepts and methods remain barely visible to us in the present. Every time I think about it, the hairs on the back of my neck stand up.

A mathematical model is a logical machine for converting assumptions into conclusions. If the model is correct and we believe its assumptions then we must, as a matter of logic, believe its conclusions. This logical guarantee allows a modeler, in principle, to navigate with confidence far from the assumptions, perhaps much further than intuition might allow, no matter how insightful, and reach surprising conclusions. But, and this is the essential point, the certainty is always relative to the assumptions. Do we believe our assumptions? We believe the fundamental physics on which biology rests. We can deduce many things from physics but not, alas, the existence of physicists. This leaves us, at least in the molecular realm, in the hands of phenomenology and informed guesswork. There is nothing wrong with that, but we should not fool ourselves that our models are objective and predictive, in the sense of fundamental physics. They are, in James Black’s resonant phrase, ‘accurate descriptions of our pathetic thinking’.

Mathematical models are a tool, which some biologists have used to great effect. My distinguished Harvard colleague, Edward Wilson, has tried to reassure the mathematically phobic that they can still do good science without mathematics [65]. Absolutely, but why not use it when you can? Biology is complicated enough that we surely need every tool at our disposal. For those so minded, the perspective developed here suggests the following guidelines:

1. Ask a question. Building models for the sake of doing so might keep mathematicians happy, but it is a poor way to do biology.
Asking a question guides the choice of assumptions and the flavor of model, and provides a criterion by which success can be judged.

2. Keep it simple. Including all the biochemical details may reassure biologists, but it is a poor way to model. Keep the complexity of the assumptions in line with the experimental context and try to find the right abstractions.

3. If the model cannot be falsified, it is not telling you anything. Fitting is the bane of modeling. It deludes us into believing that we have predicted what we have fitted, when all we have done is to select the model so that it fits. So, do not fit what you want to explain; stick the model’s neck out after it is fitted and try to falsify it.

In later life, Charles Darwin looked back on his early repugnance for mathematics, the fault of a teacher who was ‘a very dull man’, and said, ‘I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics; for men thus endowed seem to have an extra sense’ [66]. One of those people with an extra sense was an Augustinian friar, toiling in the provincial obscurity of Austro-Hungarian Brünn, teaching physics in the local school while laying the foundations for rescuing Darwin’s theory from oblivion [67], a task later accomplished, in the hands of J. B. S. Haldane, R. A. Fisher and Sewall Wright, largely by mathematics. Darwin and Mendel represent the qualitative and quantitative traditions in biology. It is a historical tragedy that they never came together in their lifetimes. If we are going to make sense of systems biology, we shall have to do a lot better.

Abbreviations: COPI, Coat Protein I; ERK, extracellular signal-regulated kinase; HES, Hairy/Enhancer of Split family; LCK, lymphocyte-specific protein tyrosine kinase; MAPK, mitogen-activated protein kinase; MHC, major histocompatibility complex; SHP-1, SH2 domain-containing tyrosine phosphatase-1; SNARE, soluble N-ethyl-maleimide-sensitive factor attachment protein receptor; TCR, T-cell receptor.

Acknowledgements: I thank Grégoire Altan-Bonnet, Ron Germain, Ryo Kageyama, Julian Lewis, Andy Oates and Tom Rapoport for very helpful comments on their respective models but must point out that the opinions expressed in this paper are mine and that any errors or omissions should be laid at my door. I also thank two anonymous reviewers for their thoughtful comments and Mary Welstead for stringent editorial consultancy.

Authors’ Affiliations: Department of Systems Biology, Harvard Medical School, Boston, USA

References

1. Gunawardena J: Some lessons about models from Michaelis and Menten. Mol Biol Cell. 2012, 23: 517-519. 10.1091/mbc.E11-07-0643.
2. Gunawardena J: Biology is more theoretical than physics. Mol Biol Cell. 2013, 24: 1827-1829. 10.1091/mbc.E12-03-0227.
3. Ferrell JE, Machleder EM: The biochemical basis of an all-or-none cell fate switch in Xenopus oocytes. Science. 1998, 280: 895-898. 10.1126/science.280.5365.895.
4. Altschuler SJ, Wu LF: Cellular heterogeneity: do differences make a difference? Cell. 2010, 3: 559-563.
5. Krebs H: Otto Warburg: Cell Physiologist, Biochemist and Eccentric. 1981, Oxford, UK: Clarendon Press.
6. Watson JD: Genes, Girls and Gamow. 2001, Oxford, UK: Oxford University Press.
7. Kay LE: Who Wrote the Book of Life. A History of the Genetic Code. 2000, Stanford, CA, USA: Stanford University Press.
8. Chargaff E: Essays on Nucleic Acids. 1963, Amsterdam, Holland: Elsevier Publishing Company.
9. Nirenberg M: The genetic code. Nobel Lectures, Physiology or Medicine 1963–1970. 1972, Amsterdam, Holland: Elsevier Publishing Co.
10. Brenner S: Sequences and consequences. Phil Trans Roy Soc. 2010, 365: 207-212. 10.1098/rstb.2009.0221.
11. Pearl J: Causality: Models, Reasoning and Inference. 2000, Cambridge, UK: Cambridge University Press.
12. Feynman RP, Leighton RB, Sands M: The Feynman Lectures on Physics. Volume 1. Mainly Mechanics, Radiation and Heat. 1963, Reading, MA, USA: Addison-Wesley.
13. Levitt M: The birth of computational structural biology. Nat Struct Biol. 2001, 8: 392-393. 10.1038/87545.
14. Karplus M, Kuriyan J: Molecular dynamics and protein function. Proc Natl Acad Sci USA. 2005, 102: 6679-6685. 10.1073/pnas.0408930102.
15. Dror RO, Dirks RM, Grossman JP, Xu H, Shaw DE: Biomolecular simulation: a computational microscope for molecular biology. Annu Rev Biophys. 2012, 41: 429-452. 10.1146/annurev-biophys-042910-155245.
16. Atkins P, de Paula J: Elements of Physical Chemistry. 2009, Oxford, UK: Oxford University Press.
17. Mysels KJ: Textbook errors VII: the laws of reaction rates and of equilibria. J Chem Educ. 1956, 33: 178-179. 10.1021/ed033p178.
18. Cornish-Bowden A: Fundamentals of Enzyme Kinetics. 1995, London, UK: Portland Press.
19. Weiss JN: The Hill equation revisited: uses and misuses. FASEB J. 1997, 11: 835-841.
20. Black J: A personal view of pharmacology. Annu Rev Pharmacol Toxicol. 1996, 36: 1-33.
21. Colquhoun D: The quantitative analysis of drug-receptor interactions: a short history. Trends Pharmacol Sci. 2006, 27: 149-157.
22. Black J: Drugs from emasculated hormones: the principles of syntopic antagonism. Nobel Lectures, Physiology or Medicine 1981–1990. Edited by: Frängsmyr T. 1993, Singapore: World Scientific.
23. Heinrich R, Rapoport TA: Generation of nonidentical compartments in vesicular transport systems. J Cell Biol. 2005, 162: 271-280.
24. Varma A, Morbidelli M, Wu H: Parametric Sensitivity in Chemical Systems. 2005, Cambridge, UK: Cambridge University Press.
25. Davis TH: Profile of Tom A Rapoport. Proc Natl Acad Sci USA. 2005, 102: 14129-14131. 10.1073/pnas.0506177102.
26. Kirschner M: Reinhart Heinrich (1946–2006). Pioneer in systems biology. Nature. 2006, 444: 700. 10.1038/444700a.
27. Heinrich R, Rapoport SM, Rapoport TA: Metabolic regulation and mathematical models. Prog Biophys Molec Biol. 1977, 32: 1-82.
28. Ptashne M: On the use of the word ‘epigenetic’. Curr Biol. 2007, 17: 233-236. 10.1016/j.cub.2007.02.030.
29. Altan-Bonnet G, Germain RN: Modeling T cell antigen discrimination based on feedback control of digital ERK responses. PLoS Biol. 2005, 3: 1925-1938.
30. Hopfield JJ: Kinetic proofreading: a new mechanism for reducing errors in biosynthetic processes requiring high specificity. Proc Natl Acad Sci USA. 1974, 71: 4135-4139. 10.1073/pnas.71.10.4135.
31. McKeithan TW: Kinetic proofreading in T-cell receptor signal transduction. Proc Natl Acad Sci USA. 1995, 92: 5042-5046. 10.1073/pnas.92.11.5042.
32. Murugan A, Huse DA, Leibler S: Speed, dissipation, and error in kinetic proofreading. Proc Natl Acad Sci USA. 2012, 109: 12034-12039. 10.1073/pnas.1119911109.
33. Štefanová I, Hemmer B, Vergelli M, Martin R, Biddison WE, Germain RN: TCR ligand discrimination is enforced by competing ERK positive and SHP-1 negative feedback pathways. Nat Immunol. 2003, 4: 248-254. 10.1038/ni895.
34. Brandman O, Ferrell JE, Li R, Meyer T: Interlinked fast and slow positive feedback loops drive reliable cell decisions. Science. 2005, 310: 496-498. 10.1126/science.1113834.
35. Tsai TY, Choi YS, Ma W, Pomerening JR, Tang C, Ferrell JE: Robust, tunable biological oscillations from interlinked positive and negative feedback loops. Science. 2008, 321: 126-129. 10.1126/science.1156951.
36. François P, Voisinne G, Siggia ED, Altan-Bonnet G, Vergassola M: Phenotypic model for early T-cell activation displaying sensitivity, specificity, and antagonism. Proc Natl Acad Sci USA. 2013, 110: 888-897. 10.1073/pnas.1300752110.
37. Sauro HM, Hucka M, Finney A, Wellock C, Bolouri H, Doyle J, Kitano H: Next generation simulation tools: the Systems Biology Workbench and BioSPICE integration. Omics. 2003, 7: 355-372. 10.1089/153623103322637670.
38. Gutenkunst RN, Waterfall JJ, Casey FP, Brown KS, Myers CR, Sethna JP: Universally sloppy parameter sensitivities in systems biology models. PLoS Comput Biol. 2007, 3: 1871-1878.
39. Dyson F: A meeting with Enrico Fermi. Nature. 2004, 427: 297. 10.1038/427297a.
40. Brünger A: Free R value: a novel statistical quantity for assessing the accuracy of crystal structures. Nature. 1992, 355: 472-475. 10.1038/355472a0.
41. Cooke J, Zeeman EC: A clock and wavefront model for control of the number of repeated structures during animal morphogenesis. J Theor Biol. 1976, 58: 455-476. 10.1016/S0022-5193(76)80131-2.
42. Cooke J: The problem of periodic patterns in embryos. Phil Trans R Soc Lond B Biol Sci. 1981, 295: 509-524. 10.1098/rstb.1981.0157.
43. Palmeirim I, Henrique D, Ish-Horowicz D, Pourquié O: Avian hairy gene expression identifies a molecular clock linked to vertebrate segmentation and somitogenesis. Cell. 1997, 91: 639-648. 10.1016/S0092-8674(00)80451-1.
44. Pourquié O: The segmentation clock: converting embryonic time into spatial pattern. Science. 2003, 301: 328-330. 10.1126/science.1085887.
45. Sassone-Corsi P: Rhythmic transcription with autoregulatory loops: winding up the biological clock. Cell. 1994, 78: 361-364. 10.1016/0092-8674(94)90415-4.
46. Åström KJ, Murray RM: Feedback Systems. An Introduction for Scientists and Engineers. 2008, Princeton, NJ, USA: Princeton University Press.
47. Lewis J: Autoinhibition with transcriptional delay: a simple mechanism for the zebrafish somitogenesis oscillator. Curr Biol. 2003, 13: 1398-1408. 10.1016/S0960-9822(03)00534-7.
48. Tyson JJ, Othmer HG: The dynamics of feedback control circuits in biochemical pathways. Progress in Theoretical Biology, Volume 5. Edited by: Rosen R, Snell F. 1978, New York, NY, USA: Academic Press.
49. Monk NAM: Oscillatory expression of Hes1, p53, and NF-κB driven by transcriptional time delays. Curr Biol. 2003, 13: 1409-1413. 10.1016/S0960-9822(03)00494-9.
50. Jensen MH, Sneppen K, Tiana G: Sustained oscillations and time delays in gene expression of protein Hes1. FEBS Lett. 2003, 541: 176-177. 10.1016/S0014-5793(03)00279-5.
51. Lewis J: From signals to patterns: space, time and mathematics in developmental biology. Science. 2008, 322: 399-403. 10.1126/science.1166154.
52. Hirata H, Bessho Y, Kokubu H, Masamizu Y, Yamada S, Lewis J, Kageyama R: Instability of Hes7 protein is crucial for the somite segmentation clock. Nat Genet. 2004, 36: 750-754.
53. Swinburne IA, Miguez DG, Landgraf D, Silver PA: Intron length increases oscillatory periods of gene expression in animal cells. Genes Dev. 2008, 22: 2342-2346. 10.1101/gad.1696108.
54. Takashima Y, Ohtsuka T, González A, Miyachi H, Kageyama R: Intronic delay is essential for oscillatory expression in the segmentation clock. Proc Natl Acad Sci USA. 2011, 108: 3300-3305. 10.1073/pnas.1014418108.
55. Harima Y, Takashima Y, Ueda Y, Ohtsuka T, Kageyama R: Accelerating the tempo of the segmentation clock by reducing the number of introns in the Hes7 gene. Cell Rep. 2013, 3: 1-7. 10.1016/j.celrep.2012.11.012.
56. Oswald A, Oates AC: Control of endogenous gene expression timing by introns. Genome Biol. 2011, 12: 107. 10.1186/gb-2011-12-3-107.
57. Masamizu Y, Ohtsuka T, Takashima Y, Nagahara H, Takenaka Y, Yoshikawa K, Okamura H, Kageyama R: Real-time imaging of the somite segmentation clock: revelation of unstable oscillators in the individual presomitic mesoderm cells. Proc Natl Acad Sci USA. 2006, 103: 1313-1318.
58. Giudicelli F, Özbudak EM, Wright GJ, Lewis J: Setting the tempo in development: an investigation of the zebrafish somite clock mechanism. PLoS Biol. 2007, 5: e150. 10.1371/journal.pbio.0050150.
59. Schröter C, Ares S, Morelli LG, Isakova A, Hens K, Soroldoni D, Gajewski M, Jülicher F, Maerkl SJ, Deplancke B, Oates AC: Topology and dynamics of the zebrafish segmentation clock core circuit. PLoS Biol. 2012, 10: e1001364. 10.1371/journal.pbio.1001364.
60. Hanisch A, Holder MV, Choorapoikayil S, Gajewski M, Özbudak EM, Lewis J: The elongation rate of RNA polymerase II in zebrafish and its significance in the somite segmentation clock. Development. 2013, 140: 444-453. 10.1242/dev.077230.
61. Morelli LG, Ares S, Herrgen L, Schröter C, Jülicher F, Oates AC: Delayed coupling theory of vertebrate segmentation. HFSP J. 2009, 3: 55-66. 10.2976/1.3027088.
62. Herrgen L, Ares S, Morelli LG, Schröter C, Jülicher F, Oates AC: Intercellular coupling regulates the period of the segmentation clock. Curr Biol. 2010, 20: 1244-1253. 10.1016/j.cub.2010.06.034.
63. Oates AC, Morelli LG, Ares S: Patterning embryos with oscillations: structure, function and dynamics of the vertebrate segmentation clock. Development. 2012, 139: 625-639. 10.1242/dev.063735.
64. Khinchin AI: Mathematical Foundations of Statistical Mechanics. 1949, New York, NY, USA: Dover Publications Inc.
65. Wilson EO: Letters to a Young Scientist. 2013, New York, NY, USA: Liveright Publishing Corporation.
66. The Autobiography of Charles Darwin. 1809–1882. Edited by: Barlow N. 1958, New York, NY, USA: W. W. Norton and Co, Inc.
67. Mawer S: Gregor Mendel. Planting the Seeds of Genetics. 2006, New York, NY, USA: Abrams.

© Gunawardena; licensee BioMed Central Ltd. 2014
Thursday, May 31, 2012

Psychic Contributions to Physics

Extrasensory Perception of Subatomic Particles (PDF), (HTML)

Abstract - A century-old claim by two early leaders of the Theosophical Society to have used a form of ESP to observe subatomic particles is evaluated. Their observations are found to be consistent with facts of nuclear physics and with the quark model of particle physics provided that their assumption that they saw atoms is rejected. Their account of the force binding together the fundamental constituents of matter is shown to agree with the string model. Their description of these basic particles bears striking similarity to basic ideas of superstring theory. The implication of this remarkable correlation between ostensible paranormal observations of subatomic particles and facts of nuclear and particle physics is that quarks are neither fundamental nor hadronic states of superstrings, as many physicists currently assume, but, instead, are composed of three subquark states of a superstring.

Occultism and the atom: the curious story of isotopes (PDF), (HTML)

For example, Besant and Leadbeater reported in 1908 in the journal The Theosophist their discovery of a variation of neon - five years before the English chemist Frederick Soddy gave the name "isotopes" to atoms of an element differing in mass. Their colleague, C. Jinarajadasa, who made sketches and notes during their investigative sessions, wrote in 1943 to Professor F. W. Aston, inventor of the mass spectrograph, at Cambridge University, England, informing him that Besant and Leadbeater had discovered the neon-22 isotope by psychic means in 1907, five years before scientists found it. (How the two Theosophists identified isotopes will be explained later.) The distinguished scientist replied that he was not interested in Theosophy!

Copyright © 2012 by ncu9nc All rights reserved. Texts quoted from other sources are Copyright © by their owners.

Wednesday, May 30, 2012

Skeptiko Interview with Dr. Melvin Morse

In the Skeptiko podcast interview with Dr. Melvin Morse, Morse tells of the case of a child drowning victim who had been underwater for 17 minutes and, after being rescued, had no heartbeat for an additional 45 minutes. Dr. Morse never saw any sign of life in the patient. At the time he thought she had died. It was only long afterward that he found out she survived. Yet during that time, the patient experienced floating out of her body and remembered being intubated, hearing a phone conversation the doctor had, and hearing the nurses talking about a cat that died. When she regained consciousness she asked the nurses where her friends from heaven were. She also remembered that heaven was "fun". The full interview is here.

So by chance or coincidence or fate or whatever, I happened to be in Pocatello, Idaho and there was a child there who had drowned in a community swimming pool. She was documented to be under water for at least 17 minutes. It just so happened that a pediatrician was in the locker room at the same community swimming pool and he attempted to revive her on the spot. His intervention probably saved her life but again, he documented that she had no spontaneous heartbeat for I would say at least 45 minutes, until she arrived at the emergency room. Then our team got there. She was really dead. All this debate over how close do these patients come to death, etc., you know, Alex, I had the privilege of resuscitating my own patients and she was, for all intents and purposes, dead.
In fact, I had told her parents that. I said that it was time for them to say goodbye to her. This was a very deeply religious Mormon family. They actually did. They crowded around the bedside and held hands and prayed for her and such as that. She was then transported to Salt Lake City. She lived. She not only lived but three days later she made a full recovery.

Alex Tsakiris: And what did she tell you…

Dr. Melvin Morse: Her first words, the first words she said when she came out of her coma, she turned to the nurse down at Primary Children’s in Salt Lake City. She says, “Where are my friends?” And then they’d say, “What do you mean, where are your friends?” She’d say, “Yeah, all the people that I met in Heaven. Where are they?” [Laughs] The innocence of a child. So I saw her in follow-up, another one of these odd twists of fate. I happened to be in addition doing my residency and just happened to be working in the same community clinic in that area. My jaw just dropped to the floor when she and her mother walked in. I was like, “What?” I had not even heard that she had lived. I had assumed that she had died. She looked at me and she said to her mother, “There’s the man that put a tube down my nose.” [Laughs]

Alex Tsakiris: What are you thinking at that point when she says that?

Dr. Melvin Morse: You know, it’s one of those things—I laughed. I sort of giggled the way a teenager would giggle about sex. It was just embarrassing. I didn’t know what to think. Certainly, I’d trained at Johns Hopkins. I thought when you died you died. I said, “What do you mean, you saw me put a tube in your nose?” She said, “Oh, yeah. I saw you take me into another room that looked like a doughnut.” She said things like, “You called someone on the phone and you asked, ‘What am I supposed to do next?’” She described the nurses talking about a cat who had died. One of the nurses had a cat that had died and it was just an incidental conversation. She said she was floating out of her body during this entire time. I just sort of laughed. And then she taps me on the wrist. You’ve got to hear this, Alex. After I laughed she taps me on the wrist and she says, “You’ll see, Dr. Morse. Heaven is fun.” [Laughs] I was completely blown away by the entire experience. I immediately determined that I would figure out what was going on here. This was in complete defiance of everything I had been taught in terms of medicine.

Tuesday, May 29, 2012

Meditation Music

Here is some music that may help with the meditation I discussed yesterday:

• Put a Little Love in Your Heart
Think of your fellow man, lend him a helping hand, put a little love in your heart.

• Get Together
C'mon people now, smile on your brother, ev'rybody get together, try and love one another right now.

• Shower the People
Shower the people you love with love, show them the way that you feel.

If you click on the links above you will go to another site which has the lyrics of the song. If you search the internet, you may be able to find mp3 downloads or YouTube videos of this music.

Monday, May 28, 2012

How to Tap into Universal Love

Update: I have added this post to my web site and made a few updates to it. Please refer to Tapping into Universal Love on the meditation page on my web site for the most recent version of this information.

God is love. People who experience being in the presence of God during near-death experiences describe having an overwhelming feeling of being loved. God is omnipresent.
You can tap into this source of universal love without having a near-death experience. To do it, you use your spiritual capabilities: the capabilities that all spirits have and that, as an incarnated spirit, you still have access to. Spirits interact with their world through their mind. They think of a place they want to go to and they start moving there. They are telepathic: they think of someone and their thoughts go off to that person. Spirits use their mind the way an incarnated person uses tools. Spirits create by using their mind. We use the same word "create" to describe how people use their imagination because it is the same thing.

To create a tap into universal love, use your imagination. Imagine a light beam of love coming down to you from above. Hold your hands in front of you with your palms facing upward to receive it. Relax any tension or tightness you may feel in your chest, open your heart, and let the love flow out into the world.

Try this meditation:

Step 1: Imagine a light beam of love coming down upon you from above. Hold your hands in front of you with your palms facing upward to receive it. Say to yourself, "Love is all around, why don't you take it?" (If you know the tune, you may sing it to yourself.)

Step 2: Relax any tension or tightness you may feel in your chest, open your heart, and imagine love emanating from your heart and flowing out into the world, or to a situation you don't like (to desensitize yourself to the situation), or to someone who might be a problem for you (to develop forgiveness and tolerance). Say to yourself, "Love is all around, why don't you make it?" (If you know the tune, you may sing it to yourself.)

Repeat these two steps for the duration of the meditation session. If you feel like smiling while you do this, go ahead and smile. It is probably an indication that you are doing it right. Also, sometimes smiling a little bit can help you reenter this state. I recommend some music to play during this meditation in this post.

Friday, May 25, 2012

The Experience of Oneness

I made a minor change to my web page on Varieties of Mystical Experiences in the section Kensho and Kundalini. I added a link to a web page by Christine Farrenkopf discussing scientific research on the changes in brain activity that occur when meditators experience a sense of oneness sometimes referred to as a nondual state. (UPDATE: I changed the link on my web site to go to this post.)

From Farrenkopf's web report:

The "peak" of meditation is clearly a subjective state, with each individual attaining it in different manners and having different time requirements. However, the sensation and meaning behind this moment is consistent among all who reach it. At the peak, the subjects indicate that they lose their sense of individual existence and feel inextricably bound with the universe. "There [are] no discrete objects or beings, no sense of space or the passage of time, no line between the self and the rest of the universe" (Newberg 119).

The subjects then meditated. When they reached the peak, they pulled on a string attached at one end to their finger and at the other to Dr. Newberg. This was the cue for Newberg to inject the radioactive tracer into the IV connected to the subject. Because the tracer almost instantly "locks" onto parts of the brain to indicate their activity levels, the SPECT gives a picture of the brain essentially at that peak moment (Newberg 3).
The results revealed a marked decrease in the activity of the posterior, superior parietal lobe and a marked increase in the activity of the prefrontal cortex, predominantly on the right side of the brain (Newberg 6). Such changes in activity levels demonstrated that something was going on in the brain in terms of spiritual experience. The next step was to look at what these particular parts of the brain do. Studies of damage suffered to a region of the brain have enabled us to draw conclusions about its role by observing loss of function. It has been concluded that the posterior, superior parietal lobe is involved in both the creation of a three-dimensional sense of self and an individual's ability to navigate through physical space (Journal 216). The region of the lobe in the left hemisphere of the brain allows for a person to conceive of the physical boundaries of his body (Newberg 28). It responds to proprioceptive stimuli, most importantly the movement of limbs. The region of the lobe in the right hemisphere creates the perception of the matrix through which we move.

From a subjective point of view, when in the nondual state, it seems like the self disappears and the experiencer becomes "one with everything". From an objective point of view, research on meditators shows that the experience is due to a decrease in activity of the posterior, superior parietal lobe in the brain. These results are consistent with other research which shows that that region of the brain is responsible for the sense of self. At first glance, this may seem like a materialist explanation for the experience of oneness, but it is consistent with the hypothesis that consciousness is non-physical and that the brain acts as a filter of consciousness. It indicates that the sense of self is not an objective fact. The sense of self is a subjective opinion, an illusion, produced by the brain. It is also interesting that people who have near-death experiences also report a sense of oneness, which suggests the experience of oneness is a real experience of our true nature when we are not constrained by the physical brain. Whatever the explanation, the experience of oneness does show that the sense of self and separateness we consider to be our normal reality is merely a subjective opinion.

Thursday, May 24, 2012

Mario Beauregard on Near Death Experiences

Mario Beauregard's recent article on near-death experiences, "Near death, explained", discusses some cases of NDEs and discusses why skeptical explanations for the phenomena are wrong. He concludes NDEs are strong evidence for the afterlife.

The scientific NDE studies performed over the past decades indicate that heightened mental functions can be experienced independently of the body at a time when brain activity is greatly impaired or seemingly absent (such as during cardiac arrest). Some of these studies demonstrate that blind people can have veridical perceptions during OBEs associated with an NDE. Other investigations show that NDEs often result in deep psychological and spiritual changes. These findings strongly challenge the mainstream neuroscientific view that mind and consciousness result solely from brain activity. As we have seen, such a view fails to account for how NDErs can experience—while their hearts are stopped—vivid and complex thoughts and acquire veridical information about objects or events remote from their bodies.
NDE studies also suggest that after physical death, mind and consciousness may continue in a transcendent level of reality that normally is not accessible to our senses and awareness. Needless to say, this view is utterly incompatible with the belief of many materialists that the material world is the only reality.

His response to criticisms of the article is provided in a second article, "Near-death, revisited":

Corroborated veridical NDE perceptions during cardiac arrest (and several other phenomena discussed in “Brain Wars”) strongly suggest that so-called “scientific materialism” is not only limited, but wrong. In line with this, nearly a century ago, quantum mechanics (QM) dematerialized the classical universe by showing that it is not made of minuscule billiard balls, as drawings of atoms and molecules would lead us to believe. In other words, QM acknowledges that the physical world cannot be fully understood without making reference to mind and consciousness, that is, the physical world is no longer viewed as the primary or sole component of reality (this was well explained by Wolfgang Pauli, one of the founders of QM...)

Wednesday, May 23, 2012

Dean Radin: Consciousness and the double-slit interference pattern: Six experiments

In the recent past, I have posted about the ability of consciousness to influence the physical world at the level of quantum mechanics and in the brain. This shows that consciousness is not an epiphenomenon, nor is it an illusion, and that consciousness cannot be produced by matter or physical processes. Many of the founders of quantum mechanics, such as Erwin Schrödinger and Max Planck, believed this to be true. Now Dean Radin has published a research paper describing more evidence of this. His latest experiments show how meditators concentrating on an experimental apparatus can cause a photon to change from a wave to a particle. He used a double-slit apparatus and found that when meditators concentrated on the apparatus, the interference pattern caused by light waves decreased and the pattern more closely resembled that which would be caused by particles rather than waves.

Consciousness and the double-slit interference pattern: Six experiments
Dean Radin, Leena Michel, Karla Galdamez, Paul Wendland, Robert Rickenbach, and Arnaud Delorme, Physics Essays 25, 2 (2012)

Abstract: A double-slit optical system was used to test the possible role of consciousness in the collapse of the quantum wavefunction. The ratio of the interference pattern’s double-slit spectral power to its single-slit spectral power was predicted to decrease when attention was focused toward the double slit as compared to away from it. Factors associated with consciousness, such as meditation experience, electrocortical markers of focused attention, and psychological factors including openness and absorption, significantly correlated in predicted ways with perturbations in the double-slit interference pattern. The results appear to be consistent with a consciousness-related interpretation of the quantum measurement problem. © 2012 Physics Essays Publication. [DOI: 10.4006/0836-1398-25.2.157]

Tuesday, May 22, 2012

Randi's Unwinnable Prize: The Million Dollar Challenge

Last month I wrote a couple of posts in a discussion forum explaining why the Amazing Randi's million dollar challenge has not been won. This is important to understand because pseudoskeptics often claim that because no one has won the prize, all claims of paranormal powers must be false.
However, the truth is that a number of paranormal phenomena have been shown to be genuine, and the million dollar prize has not been won because the challenge is simply not a good way to demonstrate paranormal powers.

The million dollar challenge requires the applicant to beat million-to-one odds. Setting such a high barrier for success makes sense if you are risking a one million dollar prize. However, million-to-one odds are much higher than the scientific standard of proof, so this challenge is not necessarily the best or fairest way to determine if paranormal abilities are genuine. Designing a test that is fair to both the applicant and the challenger requires sophisticated knowledge of statistics. (How many trials would be needed for a psychic to have a 95% confidence level that they could beat million-to-one odds if their accuracy was 75%?) Most psychics don't have the understanding of statistics necessary to look out for their own interests, and therefore most applicants will not be able to demand a protocol that gives them a fair chance of winning. This is the most likely reason no one has won the prize. Furthermore, most applicants who know about Randi or understand the details of the challenge would be reluctant to spend the time, effort and expense of applying because they would not trust it to be a fair test or have confidence that they would be judged fairly or rewarded fairly if they succeeded.

• Randi supposedly has said, "I always have an out" (Fate, October 1981), and "I am a charlatan, a liar, a thief and a fake altogether" (reported to have been said on PM Magazine, July 1, 1982). Applicants for the prize have legitimate reasons not to trust Randi. An interview in Will Storr's book The Heretics quotes Randi making several deceptive statements.

• The prize is in bonds, but Randi won't say when the bonds mature or who issued them, so no one knows what the prize really is. Why won't he say what the prize really is? Applicants are legitimately afraid the prize is some sort of worthless trick.

• The applicant has to pay for their own travel expenses involved in attempting the prize. Why would they do that when they have good reasons not to trust Randi and they don't know what the prize really is?

• Randi has a history of making mean-spirited statements. He has been forced to retract statements in the past. However, applicants have to sign an agreement not to sue Randi even if he makes misleading, defamatory, slanderous, or libelous statements about the psychic.

• The applicants for Randi's prize have to prove themselves to a very high statistical standard, far beyond the level that is generally considered proof in science experiments. An experiment could be designed to satisfy this standard with fewer than ten trials. However, a psychic, depending on their rate of accuracy, might need hundreds of trials to have a fair chance of obtaining such an unlikely result (see note 1 below). Most psychics won't realize this because they don't have the necessary expertise in science or statistics, and this may be the primary reason no applicant has ever won the prize.

One scientist who did apply for the prize never heard back from Randi. Why would anyone be willing to spend their time and money to try to win the challenge when they don't trust Randi, or believe that the challenge is fair, or that the prize is real? The challenge is not really serious.
To summarize: most applicants lack the expertise in statistics needed to demand a protocol that gives them a fair chance of winning, and those who understand the details have good reasons not to trust the test. The prize is a publicity stunt designed to give materialist pseudoskeptics a one-liner: "Why has no one won the prize?!?!" The correct answer is: because it is not a good way to measure paranormal powers, and anyone who understands the situation would have very good reasons not to apply. It is sadly ironic that so many of Randi's followers, who pride themselves on their critical thinking skills, are fooled into thinking this prize is a legitimate test of paranormal phenomena. There are many independent forms of empirical evidence for ESP and the afterlife. The entire movement of pseudoskeptics is based on misdirection. Randi's followers believe they are helping to protect people from fraud, but in fact they themselves are victims of many deceptions perpetrated by the leaders of the pseudoskeptic movement. I discuss this in greater detail on my web page on Skeptical Misdirection.

(1) In an experiment to measure psychic ability, there are three numbers that need to be considered:

• The first number represents the confidence that the outcome is not due to chance. The million dollar challenge requires the psychic to perform at a level that would occur by chance only once in a million times.

• The second number is the rate of accuracy of the psychic's abilities. For example, a psychic might have an accuracy of 75% in some task where the probability of being correct by chance is only 50%.

• The third number is the number of trials needed to give the psychic a high level of confidence that they would win the prize given their rate of accuracy.

In order to achieve the required confidence that the psychic's performance is not due to chance, the challenge could require two tests of ten or fewer trials. However, the psychic might not be able to pass such a test if they are not 100% accurate. But, if the psychic is given a sufficient number of trials, they may demonstrate a success rate that, while not 100% accurate, still cannot be explained by chance at the level of confidence demanded by the challenge. In order for the psychic to have a 95% confidence level that they could beat million-to-one odds if their accuracy was 75%, they might need over 100 trials. Most psychics are not well enough versed in statistics to know how to measure their rate of accuracy or how to calculate the number of trials they need to have a good chance of winning the prize.
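To put numbers on that last claim, here is a minimal sketch (the function names and search loop are my own; the 75%-accuracy, 50%-chance setup is the example above). It finds the smallest number of binary trials for which some pass mark is simultaneously beyond million-to-one odds under chance and reachable 95% of the time at the psychic's true accuracy:

```python
from math import comb

def tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p); the empty sum is 0 when k > n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def trials_needed(accuracy=0.75, chance=0.5, odds=1e-6, power=0.95):
    """Smallest n such that some pass mark k is both beyond
    million-to-one odds under chance and reached with probability
    >= power at the claimed accuracy."""
    n = 1
    while True:
        # the smallest pass mark whose tail probability under chance
        # clears the million-to-one requirement
        k = next(k for k in range(n + 2) if tail(n, k, chance) <= odds)
        if tail(n, k, accuracy) >= power:
            return n, k
        n += 1

n, k = trials_needed()
print(f"{n} trials, passing with {k} or more correct")
```

Under these assumptions the answer comes out above 150 trials, consistent with the "over 100 trials" figure in the footnote; a psychic with a smaller edge over chance would need far more, which is the asymmetry the post is pointing at.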
Skeptical Organisations and Magazines

A Guide to the Skeptics at

For many years this "prize" has been Randi's stock-in-trade as a media skeptic, but even some other skeptics are skeptical about its value as anything but a publicity stunt. For example, CSICOP founding member Dennis Rawlins pointed out that not only does Randi act as "policeman, judge and jury" but quoted him as saying "I always have an out"! (Fate, October 1981). Contenders have to pay for their own travelling expenses if they want to go to Randi to be tested:

Rule 6: "All expenses such as transportation, accommodation and/or other costs incurred by the applicant/claimant in pursuing the reward, are the sole responsibility of the applicant/claimant."

Also, applicants waive their legal rights:

Rule 7: "When entering into this challenge, the applicant surrenders any and all rights to legal action against Mr. Randi, against any person peripherally involved and against the James Randi Educational Foundation, as far as this may be done by established statutes. This applies to injury, accident, or any other damage of a physical or emotional nature and/or financial, or professional loss, or damage of any kind."

Applicants also give Randi complete control over publicity.

Flim-Flam Flummery: A Skeptical Look at James Randi at

Recently I picked up Flim-Flam again. Having changed my mind about many things over the past twenty years, I responded to it much differently this time. I was particularly struck by the book's hectoring, sarcastic tone. Randi pictures psychic researchers as medieval fools clad in "caps and bells" and likens the delivery of an announcement at a parapsychology conference to the birth of "Rosemary's Baby." After debunking all manner of alleged frauds, he opens the book's epilogue with the words, "The tumbrels now stand empty but ready for another trip to the square" – a reference to the French Revolution, in which carts ("tumbrels") of victims were driven daily to the guillotine. Randi evidently pictures himself as the executioner who lowers the blade. In passing, two points might be made about this metaphor: the French Revolution was a product of "scientific rationalism" run amok ... and most of its victims were innocent.

The Psychic Challenge by Montague Keen. Source: hosted at

Now for the more serious bit: first, the $1 million prize. Loyd Auerbach, a leading USA psychologist and President of the Psychic Entertainers Association (some 80% of the members of his Psychic Entertainers Association believe in the paranormal, according to Dr. Adrian Parker, who was on the programme, but given no opportunity to reveal this) exposed some of the deficiencies in this challenge in an article in Fate magazine. Under Article 3, the applicant allows all his test data to be used by the Foundation in any way Mr. Randi may choose. That means that Mr. Randi can pick and choose the data at will and decide what to do with it and what verdict to pronounce on it. Under Article 7, the applicant surrenders all rights to legal action against the Foundation, or Mr. Randi, no matter what emotional, professional or financial injury he may consider he has sustained. Thus even if Mr. Randi comes to a conclusion different from that reached by his judges and publicly denounces the test, the applicant would have no redress. The Foundation and Mr. Randi own all the data. Mr. Randi can claim that the judges were fooled. The implicit accusation of fraud would leave the challenger devoid of remedy. These rules, be it noted, are in stark contrast to Mr. Randi's frequent public assertions that he wanted demonstrable proof of psychic powers.

First, his rules are confined to a single, live applicant. No matter how potent the published evidence, how incontestable the facts or rigorous the precautions against fraud, the number, qualifications or expertise of the witnesses and investigators, the duration, thoroughness and frequency of their tests or (where statistical evaluation is possible) the astronomical odds against a chance explanation: all must be ignored. Mr. Randi thrusts every case into the bin labelled 'anecdotal' (which means not written down), and thereby believes he may safely avoid any invitation to account for them.
Likewise, the production of a spanner bent by a force considerably in excess of the capacity of the strongest man, created at the request and in the presence of a group of mechanics gathered round a racing car at a pit stop by Mr. Randi's long-time enemy, Uri Geller, would run foul of the small print, which requires a certificate of a successful preliminary demonstration before troubling Mr. Randi himself. A pity, because scientists at Imperial College have tested the spanner, which its current possessor, the researcher and author Guy Lyon Playfair, not unnaturally regards as a permanent paranormal object, and there is a standing challenge to skeptics to explain its appearance.

Randi's dishonest claims about dogs by Rupert Sheldrake - from hosted at

Beware Pseudo-Skepticism - The Randi Challenge at

Since the prize money is in the form of bonds, it is possible that the bonds are worthless. For example, maybe a lot of the bonds are from corporations that are on the verge of going bankrupt? Or maybe the corporations don't have to pay off the bonds for another 40 years? In our example, Bob had to pay everything back in 24 months... this is called the "maturity" of the bond. Some bonds don't mature for a few years, others don't mature for a few decades. If Randi awards the prize of a bond that doesn't mature for 40 years, then legally I do have a million dollars... but I can't USE the million dollars until the bonds mature! As you can see, there are a lot of different scenarios where the bonds could be LEGALLY worth a million dollars, but in reality they could be worthless.

The next logical step is to find out what the bonds are really worth. To do that, I e-mailed Randi at the address he provided on his website. I politely pointed out where it said the prize was in bonds in the Challenge rules, and then I asked what corporations issued the bonds, what the interest rates were, and when the maturity dates were. These are the main factors in determining if the bonds are worthless or not. Randi replied with, "Apply, or go away." I explained to him that I wanted clarification on what he was offering, that this had nothing to do with my claim, but that these were questions aimed at getting more information about the Challenge. Randi replied with, "Immediately convertible into money. That's all I'm going to get involved in. Apply, or disappear." Obviously that doesn't answer my question at all. Immediately convertible into how much money? Convertible through whom?

The Myth of the Million Dollar Challenge at

The procedures for the Challenge included several hurdles in favor of, and multiple "outs" for, Randi and the JREF that any discerning individual capable of any kind of extraordinary human performance would think twice about (and here I'm not just referring to psychics and the like). While the JREF says that "all tests are designed with the participation and approval of the applicant", this does not mean that the tests are fair scientific tests. The JREF need to protect a very large amount of money from possible "long-range shots", and as such they ask for extremely significant results before paying out - much higher than are generally accepted in scientific research (and if you don’t agree to terms, your application is rejected). Furthermore, applicants must first pass a 'preliminary test' before they are allowed to progress to the actual 'formal' test which pays the million dollars.
As a consequence, you might well say "no wonder no serious researcher has applied for the Challenge." Interestingly, this is not the case. Dr Dick Bierman, who has a PhD in physics, informed me that he did in fact approach James Randi about the Million Dollar Challenge in late 1998: "At that point Randi mentioned that before proceeding he had to submit this preliminary proposal to his scientific board or committee. And basically that was the end of it. I have no idea where the process was obstructed but I must confess that I was glad that I could devote myself purely to science rather than having to deal with the skeptics and the associated media hypes."

Monday, May 21, 2012

Consciousness is not an Illusion or an Epiphenomenon

I have updated my web page on Skeptical Fallacies to include a section Consciousness is not an Illusion or an Epiphenomenon. Skeptics will sometimes say that consciousness is an illusion or that consciousness is an epiphenomenon of the brain. An epiphenomenon is caused by some phenomenon but cannot affect the phenomenon that causes it. Saying that consciousness is an illusion or an epiphenomenon does not really explain consciousness. See the section Consciousness cannot be Explained as an Emergent Property of the Brain for an explanation of why giving a scientific name to a phenomenon is not the same as explaining it.

The Wikipedia article on Epiphenomenon says: An epiphenomenon can be an effect of primary phenomena, but cannot affect a primary phenomenon. In philosophy of mind, epiphenomenalism is the view that mental phenomena are epiphenomena in that they can be caused by physical phenomena, but cannot cause physical phenomena.

The Wikipedia article on Epiphenomenalism says: Epiphenomenalism is the theory in philosophy of mind that mental phenomena are caused by physical processes in the brain or that both are effects of a common cause, as opposed to mental phenomena driving the physical mechanics of the brain. The impression that thoughts, feelings or sensations cause physical effects, is therefore to be understood as illusory to some extent. For example, it is not the feeling of fear that produces an increase in heart beat, both are symptomatic of a common physiological origin, possibly in response to a legitimate external threat.[1]

If consciousness could not affect the brain, then it might indeed be an epiphenomenon and its apparent influence an illusion. However, there is significant evidence that consciousness can affect the brain. One form of evidence that consciousness can influence the brain comes from the placebo effect. In certain situations, if a patient is given an inactive substance but is told that he is being given a drug, the patient will experience the effects that the drug is said to cause. One example of this occurs when a patient is given a sugar pill but told it is a pain killer. In this situation, patients report that pain is reduced, and in fact studies have indicated that this effect is caused by the production of naturally occurring opioids in the brain.
The Wikipedia article on the Placebo Effect says: The phenomenon of an inert substance's resulting in a patient's medical improvement is called the placebo effect. The phenomenon is related to the perception and expectation that the patient has; if the substance is viewed as helpful, it can heal, but, if it is viewed as harmful, it can cause negative effects, which is known as the nocebo effect. The basic mechanisms of placebo effects have been investigated since 1978, when it was found that the opioid antagonist naloxone could block placebo painkillers, suggesting that endogenous opioids are involved.[31]

What is significant about the placebo effect is that it requires the patient to believe they are being given a drug. With a real drug like a pain killer, the patient will experience the effects even if they don't know they are being treated with it. However, for the placebo effect to occur, the patient must be conscious of the fact that they are being treated. This shows that conscious awareness of a medical treatment can cause the brain to produce opioids. It shows that consciousness can affect the brain.

Another form of evidence that consciousness can affect the brain comes from the phenomenon of self-directed neuroplasticity. Neuroplasticity refers to the ability of neurons in the brain to change their organization or grow. This can occur when someone learns a skill or recovers from an injury. Self-directed neuroplasticity occurs when neurons in the brain change their organization or grow in response to self-observation of mental states. One situation where self-directed neuroplasticity occurs is meditation. During meditation, a person will observe (i.e., be conscious of) their inner state: their mental activity and the sensations in their body. This conscious attention has been found to cause changes in the brain. The article Self-Directed Neuroplasticity: A 21st-Century View of Meditation by Rick Hanson, PhD discusses this:

One of the enduring changes in the brain of those who routinely meditate is that the brain becomes thicker. In other words, those who routinely meditate build synapses, synaptic networks, and layers of capillaries (the tiny blood vessels that bring metabolic supplies such as glucose or oxygen to busy regions), which an MRI shows is measurably thicker in two major regions of the brain. One is in the pre-frontal cortex, located right behind the forehead. It's involved in the executive control of attention – of deliberately paying attention to something. This change makes sense because that's what you're doing when you meditate or engage in a contemplative activity. The second brain area that gets bigger is a very important part called the insula. The insula tracks both the interior state of the body and the feelings of other people, which is fundamental to empathy. So, people who routinely tune into their own bodies – through some kind of mindfulness practice – make their insula thicker, which helps them become more self-aware and empathic. This is a good illustration of neuroplasticity, which is the idea that as the mind changes, the brain changes, or as Canadian psychologist Donald Hebb put it, neurons that fire together wire together.

The article Mind does really matter: evidence from neuroimaging studies of emotional self-regulation, psychotherapy, and placebo effect (Beauregard M. Prog Neurobiol. 2007 Mar;81(4):218-36.
Epub 2007 Feb 9) says: The results of these investigations demonstrate that beliefs and expectations can markedly modulate neurophysiological and neurochemical activity in brain regions involved in perception, movement, pain, and various aspects of emotion processing. Collectively, the findings of the neuroimaging studies reviewed here strongly support the view that the subjective nature and the intentional content (what they are "about" from a first-person perspective) of mental processes (e.g., thoughts, feelings, beliefs, volition) significantly influence the various levels of brain functioning (e.g., molecular, cellular, neural circuit) and brain plasticity. Furthermore, these findings indicate that mentalistic variables have to be seriously taken into account to reach a correct understanding of the neural bases of behavior in humans.

The scientific evidence from the placebo effect and from self-directed neuroplasticity shows that consciousness cannot be an illusion or an epiphenomenon produced by the brain, because consciousness can affect the brain.

Friday, May 18, 2012

Consciousness Cannot be Explained as an Emergent Property of the Brain

I have updated my web page on Skeptical Fallacies to include a section explaining why Consciousness Cannot be Explained as an Emergent Property of the Brain. Some skeptics, when asked to explain how consciousness is produced by the brain, will say it is an emergent property. They may say the complexity of the brain somehow causes consciousness to emerge. This is not an actual explanation, it is just a scientific-sounding way to say that they cannot explain it. It creates the impression of an explanation without offering any actual explanation. An emergent property is a property that is not necessarily caused by the individual parts of a system but emerges when they are arranged in a certain fashion. For example, a wheel rolls. This is not necessarily a property of matter. Matter might be formed into a solid cube, which does not roll. But when matter is arranged in a wheel, it will roll. However, merely stating that something is an emergent property is not an explanation. Saying consciousness is an emergent property of the brain does not explain consciousness. When you examine a wheel you can understand why it will roll. The laws of physics explain how the ability to roll is caused by a particular arrangement of matter. When you examine a brain you cannot tell how it produces the subjective experiences of consciousness. Physics cannot explain how the subjective experiences of consciousness, what it is like to feel happy, or what it is like to see blue, or what it is like to feel pain, will arise from a particular arrangement of neurons in the brain. When skeptics say consciousness is an emergent property of the brain, that is not an explanation of consciousness. It is a rhetorical trick used because they cannot explain how consciousness is produced by the brain. They are only applying a scientific-sounding name to fool people, including themselves, into thinking it is an explanation.

Thursday, May 17, 2012

Another Cause of Skepticism

Recently, on an internet discussion forum I read, a skeptic revealed some personal information about himself. He had lost someone close to him at an unexpectedly young age. That experience influenced his attitudes toward God, the afterlife, and psychic phenomena. The experience helped to make him a skeptic.
Sometimes skeptics can be extremely provoking, but it is wise to remember that they may be influenced by personal suffering that we know nothing about. I've updated the list of reasons for skepticism on my web site. When some people experience a personal loss, or experience extreme hardship, or feel concern about the extreme hardships of others, they may be unable to understand how God could allow such suffering to occur. As a result, they may feel angry at God or be unable to believe in God. This may cause them to adopt materialism and express hostility toward anything that relates to God, such as belief in the afterlife, or anything that contradicts materialism, such as evidence for psychic phenomena. There may be a lot of skeptics who claim superior rationality but in reality are motivated not by reason but by emotion. This is a big mistake. Don't let loss, disappointment, suffering, or depression turn you away from God or the higher planes. The lower level entities are not your friends, they are not your allies, they cannot help you, they can only be against you. If you turn away from the higher planes, it will be you alone against the universe. What kind of existence will you have among the forces of ignorance, in this life and in the afterlife, if you are not allied with the light? It might be hard to understand how the system of incarnation and suffering on the earth plane can be good, but there is no practical alternative to allying yourself with the light.

Wednesday, May 16, 2012

Old Nordic Mediums

I've previously blogged about photographs of ectoplasm produced by the physical medium Jack Webber here and here. Here are some interesting photographs from physical seances demonstrating levitation. An English translation is here. More photos by the same photographer, Sven Türck, are here. The seances were conducted by Nordic mediums.

Tuesday, May 15, 2012

Why Darwinism is False

Last week I posted about why belief in the afterlife is not compatible with the belief that natural selection is solely responsible for the diversity of life (Darwinism). Here is an article that discusses some of the other flaws in Darwinism. It is a criticism of the book Why Evolution is True by Jerry Coyne.

Why Darwinism Is False, by Jonathan Wells

Darwin called The Origin of Species "one long argument" for his theory, but Jerry Coyne has given us one long bluff. Why Evolution Is True tries to defend Darwinian evolution by rearranging the fossil record; by misrepresenting the development of vertebrate embryos; by ignoring evidence for the functionality of allegedly vestigial organs and non-coding DNA, then propping up Darwinism with theological arguments about "bad design;" by attributing some biogeographical patterns to convergence due to the supposedly "well-known" processes of natural selection and speciation; and then exaggerating the evidence for selection and speciation to make it seem as though they could accomplish what Darwinism requires of them. The actual evidence shows that major features of the fossil record are an embarrassment to Darwinian evolution; that early development in vertebrate embryos is more consistent with separate origins than with common ancestry; that non-coding DNA is fully functional, contrary to neo-Darwinian predictions; and that natural selection can accomplish nothing more than artificial selection—which is to say, minor changes within existing species. Faced with such evidence, any other scientific theory would probably have been abandoned long ago.
Judged by the normal criteria of empirical science, Darwinism is false. It persists in spite of the evidence, and the eagerness of Darwin and his followers to defend it with theological arguments about creation and design suggests that its persistence has nothing to do with science at all.[50]

The article gives detailed explanations in support of these statements. I don't believe in Creationism. I don't believe Intelligent Design should be taught in schools. However, I do believe there are serious flaws in Darwinism and those should be taught in schools. I also believe scientists should be free to look for empirical and theoretical evidence of Intelligent Design without being persecuted or ostracized. A major source of problems with Darwinism comes from the fact that it is based on belief in methodological naturalism, the philosophy that only natural phenomena should be studied by science. This is a wrong view. Science should be about uncovering the truth and should not have any built-in philosophical bias. However, many mainstream scientists, because of their bias toward methodological naturalism, can't objectively assess the evidence for and against Darwinism. Every bit of evidence is interpreted to be consistent with Darwinism, in exactly the same way Creationists interpret every bit of evidence to agree with the Bible. This is one reason Darwinists are so hostile to their critics: their philosophical beliefs are threatened by criticism of Darwinism. This philosophical bias in favor of methodological naturalism is similar to the effect of reductionism on scientific thinking. It so strongly influences the thinking of scientists that they cannot conceive of possibilities outside of their preconceived ideas. This cripples their ability to understand consciousness and psychic phenomena.

Monday, May 14, 2012

I have updated my web site to include a section on philosophical arguments that the mind is not produced by the brain. There are very good philosophical reasons to believe the mind is not produced by the brain and therefore the mind is non-physical. Peter Williams discusses several reasons for this in his article Why Naturalists Should Mind about Physicalism, and Vice Versa (Quodlibet Journal, Volume 4, Number 2-3, Summer 2002). Williams explains that if the mind and the brain were the same, then all the properties of the mind would be properties of the brain. He then demonstrates that the mind cannot be identical to the brain by giving several examples of properties of the mind that are not properties of the brain. Gary R. Habermas and J. P. Moreland argue against physicalism from the 'qualia' of imagined sensory images. Qualia are the subjective feel or texture of conscious experience: "Picture a pink elephant in your mind. Now close your eyes and look at the image. In your mind, you will see a pink property. . . There will be no pink elephant outside you, but there will be a pink image of one in your mind. However, there will be no pink entity in your brain; no neurophysiologist could open your brain and see a pink entity while you are having the sense image. The sensory event has a property – pink – that no brain event has. Therefore, they cannot be identical." [19] To put this another way, the subjective feel of mental experiences such as the feeling of pain, the hearing of sound or the taste of chocolate seems very different from anything that is purely physical: "If the world were only made of matter, these subjective aspects of consciousness would not exist. But they do exist!
So there must be more to the world than matter." [20] Williams gives several more examples. These include intentionality, the ability to reason, free will, and moral responsibility. See the linked article for an explanation of why these phenomena demonstrate that the mind cannot be made of matter. Williams concludes: At the very least, the mind has several immaterial properties ... It follows that no merely physical explanation of the mind is possible. More information, including links to further reading, can be found on my web site.

Friday, May 11, 2012

Retrocausality demonstrated with quantum entanglement

The phenomenon of retrocausality, where a cause comes after its effect, seems to occur in some experiments in parapsychology. For example, in micro-pk experiments, people seem to be able to affect a random number generator through their intentions after the random numbers have already been generated. Now retrocausation has been demonstrated by scientists studying quantum entanglement. An article describes an experiment where two photons can be caused to become entangled at a time earlier than the cause itself. This can be done after the photons have been measured or even destroyed. "The fantastic new thing is that this decision to entangle two photons can be done at a much later time," said research co-author Anton Zeilinger, also of the University of Vienna. "They may no longer exist." The article is here.

Thursday, May 10, 2012

The Reality of ESP: A Physicist's Proof of Psychic Abilities, by Russell Targ

The Reality of ESP: A Physicist's Proof of Psychic Abilities, by Russell Targ: I believe in ESP because I have seen psychic miracles day after day in our government-sponsored investigations. It is clear to me, without any doubt, that many people can learn to look into the distance and into the future with great accuracy and reliability. This is what I call unobstructed awareness or remote viewing (RV). To varying degrees, we all have this spacious ability. There are presently four classes of published and carefully examined ESP experiments that are independently significant, with a probability of chance occurrence of less than one time in a million... The full article is here.

Wednesday, May 9, 2012

Karl "Falsifiability" Popper believed the soul was nonmaterial

Many skeptics say theories that contradict materialism are unscientific because those theories are not falsifiable. For a theory to be scientific, it must be testable. For a theory to be testable, it must be falsifiable: there must be a situation where, if the theory is wrong, you can demonstrate it is wrong. For example, you can test the theory of gravity by measuring how objects fall. If they don't accelerate the way the theory of gravity predicts they should, then the theory is wrong, it is falsified. If objects do fall the way the theory predicts, then the theory survives the test. Skeptics often say belief in spirits or psi is unscientific because any unexplained phenomenon can be said to be caused by a spirit or by psi and there is no way to disprove it. What may be surprising to many skeptics is that Karl Popper, who first proposed that falsifiability is necessary for a theory to be scientific, did not believe in materialism. He believed in dualism, which holds that the mind is nonmaterial.
The Wikipedia article on Karl Popper explains falsifiability: Logically, no number of positive outcomes at the level of experimental testing can confirm a scientific theory, but a single counterexample is logically decisive: it shows the theory, from which the implication is derived, to be false. The term "falsifiable" does not mean something is made false, but rather that, if it is false, it can be shown by observation or experiment. Popper's account of the logical asymmetry between verification and falsifiability lies at the heart of his philosophy of science. It also inspired him to take falsifiability as his criterion of demarcation between what is, and is not, genuinely scientific: a theory should be considered scientific if, and only if, it is falsifiable.

The Wikipedia article on Philosophy of Mind describes Popper as a defender of the interactionist dualism espoused by Descartes: Interactionist dualism, or simply interactionism, is the particular form of dualism first espoused by Descartes in the Meditations.[8] In the 20th century, its major defenders have been Karl Popper and John Carew Eccles.[30] It is the view that mental states, such as beliefs and desires, causally interact with physical states.[9]

The Wikipedia article on René Descartes explains that dualism as espoused by Descartes holds that the soul is nonmaterial and does not follow the laws of nature: Descartes in his Passions of the Soul and The Description of the Human Body suggested that the body works like a machine, that it has material properties. The mind (or soul), on the other hand, was described as nonmaterial and as not following the laws of nature.

Is belief in psi or spirits unscientific? It depends. It depends on what those beliefs are theorized as an explanation of. For example, if you theorize that spirits are an explanation of mediumship, that can be tested. If a medium is communicating with a spirit, the medium should be able to obtain information about the spirit that the medium could not otherwise know. If the medium could not obtain any information about the spirit, such as their appearance, their personality traits, the things they did in life, etc., then the theory that the medium is communicating with a spirit would not pass the test. There is, in fact, a lot of evidence that mediums do communicate with spirits. The medium Mrs. Piper passed many such tests.

Tuesday, May 8, 2012

Consciousness is not Produced by the Brain

I have updated the description of the filter model of the brain on my web site: There is no doubt that the brain and the conscious mind interact. Brain damage can cause loss of some functions of consciousness. Amnesia after a head injury or poor memory due to aging are two examples. Neurological activity can be measured and shown to be associated with mental activity. Nerve impulses from sensory organs result in brain activity, and the conscious mind has awareness of the sensations perceived. When the mind generates the impulse to move, nerve impulses are carried from the brain to the muscles to cause movement. Consciousness is affected by brain activity, and it is able to influence brain activity. However, this is only a correlation; it is not proof that neurological activity causes consciousness. The correlation between consciousness and brain activity should also exist if the brain is an interface between a nonphysical mind and the physical body. One way to think of this is that the brain is like a filter of consciousness. This is called the filter model of the brain.
In the filter model, consciousness is a nonphysical phenomenon, and the brain filters consciousness while we are incarnated in our physical bodies. The brain could filter some aspects of consciousness the way colored glass can filter out some wavelengths of light. What passes through the brain filter is the restricted set of conscious faculties that we have while in the physical body. The filter model is superior to the hypothesis that the brain produces consciousness because the filter model explains more evidence. You can damage a filter in two ways. You can clog it or you can punch a hole in it. When brain damage causes loss of function like amnesia, that is like a clog in the filter. When brain injury results in increased function, that is like a hole punched in the filter. An example of increased function is when people have increased psychic abilities after a brain injury. In the filter model, one of the functions of the brain is to restrict consciousness. In that case, if you release the conscious mind from the brain, as happens during a near death experience, you should have expanded, unfiltered consciousness. This is exactly what happens during a near death experience. People who have NDEs are able to perceive more than they do when in the body. They report seeing in 360 degrees and seeing colors that they do not see when in the body. Blind people report seeing during NDEs. Some near death experiencers report being able to communicate telepathically with other beings. Some report understanding that time is just an illusion or that they seem to have access to all the knowledge in the universe. More at my web site.

Monday, May 7, 2012

Belief in the afterlife is not compatible with belief in natural selection

Is belief in the afterlife compatible with the belief that evolution is due solely to natural selection? No, it isn't. There are three reasons.

1. Since the spirit can influence the behavior of an individual, the characteristics of that spirit can influence the fitness of the individual. Therefore fitness is not determined solely by the genetic content of the organism.

2. The ease with which the organism may be controlled by the spirit also affects its fitness. Species may evolve to respond more readily to the intentions of the spirit. They will develop characteristics that are not a response to environmental factors but are determined by the mechanism by which the spirit interacts with the physical body.

3. Spirit scientists might influence the evolution of the human species to make it a better vehicle for incarnation. They might do this directly through genetic manipulations, in which case mutations would not occur by chance but would be inserted by an intelligent entity. Or, spirits might incarnate only into those organisms that have desirable characteristics. In this case fitness might be determined not by nature but by spirits.

There is a lot of controversy in our society about whether natural selection really explains the evolution of life on earth. The strongest evidence against natural selection is the evidence for the afterlife. Given the vast amount of evidence for the afterlife, Darwinists have a lot to be worried about. Their theory is incomplete because it does not consider the effects spirits may have on the fitness of the individual.
Friday, May 4, 2012

Evidence for the Afterlife from Quantum Mechanics

I added a section about the evidence for the afterlife that comes from quantum mechanics to my web site: When physicists study matter at the atomic level, they find that the properties of matter are not determined until a conscious being perceives that matter. This demonstrates that physical matter depends on consciousness for its existence and therefore consciousness cannot arise from matter. The brain, which is composed of matter, cannot produce consciousness, so consciousness must have an existence independent from matter. While this interpretation of quantum mechanics is not universally held by all physicists, some of the original founders of quantum mechanics, including Nobel Prize winners in physics such as Max Planck and Erwin Schrödinger, believed this. More here.

Thursday, May 3, 2012

Medium Jack Webber

In a previous post, Ectoplasm and Materialization, I linked to photographs of Jack Webber producing ectoplasm during a seance. Since that time I had a chance, in the comment section of Michael Prescott's blog, to ask Zerdini his opinion of their authenticity. His reply was very informative. The response:

Yes they were: Leon Isaacs, who took the photographs at Webber's circles, used two cameras placed at different angles ... shots using this two-camera technique showed the disposition of trumpets and other objects, establishing that they were not held aloft by any material agency. Isaacs's pictures were taken by flashlight, the source of the light being screened by an infrared filter which suppressed practically all visible light rays and only permitted infrared emanations to pass. In effect there was a brief glow at the instant of exposure which had no harmful effects on the medium. Many of the photos of Jack Webber were taken by a 'Daily Mirror' photographer. Harry Edwards can be seen in some of the photographs as one of the sitters.

THE following report occupied the best part of the two centre pages of the Daily Mirror on February 28th, 1939. "Cassandra" is the pen-name of a gentleman on the staff of the Daily Mirror who writes a daily pertinent review on matters in general. He is well known for his cryptic and biting sarcasm, and has, on numbers of occasions, given full vent to his opposition to spiritualism. The séance in question was held in North London at a place to which the medium had never been before, and the people present were complete strangers. Mr. Leon Isaacs had been asked to take infra-red photographs. The problem arose as to the means of transporting the equipment, and since "Cassandra" had a car, he was asked to help this way. Thus the only reason why "Cassandra" was present was because he possessed a car. The article was illustrated by a photograph (Plate No. 20), with the following description beneath it: "The medium in a trance, lashed to the chair, while a table leaves the ground and books fly through the air ... a photograph taken during the séance attended by 'Cassandra.'" The heading was "Cassandra got a surprise at Séance," and his report, in his caustic manner, reads as follows:

"I claim I can bring as much scepticism to bear on spiritualism as any newspaper writer living, and that's a powerful load of scepticism these days. I haven't got an open mind on the subject--I'm a violent, prejudiced unbeliever with a limitless ability to leer at the unknown. At least, I was till last Saturday. And then I got a swift, sharp, ugly jolt that shook most of my pet sneers right out of their sockets.
"Picture to yourself a small room in a typical suburban house. In one corner a radio-gramophone. In the centre a ring of chairs. At the far end an armchair." "About a dozen people filed in and sat in the circle. I hope they won't mind my saying it, but they struck me as a credulous collection that would have brought tears of joy to a sharepusher's eyes." "Almost everyone a genuine customer for a lovely phony gold brick." "They sat down and the medium, a young Welsh ex-miner, was then roped to the arm-chair. The photographer and I stood outside the circle. The lights went out and we sailed rapidly into the unknown." "The medium gurgled like water running out of a bath, and we opened up with a strangled prayer." "The circle of believers answered with 'All Hail the Power of Jesu's Name,' and I was told that we were 'on the brink.' I thought we were in Cockfosters, Herts, but I soon began to doubt it when trumpets sprayed with luminous paint shot round the room like fishes in a tank. They hovered like pike in a stream, and then swam slowly about. "The medium snored and struggled for breath." "Hymns, Trumpets" "Somebody put a record on, and we were soon bellowing 'Daisy, Daisy, give me your answer, do.' The trumpets beat time and hurled themselves against the ceiling." "A bell rang." "There was considerable excited laughter, and in a slight hysteria we sang 'There is a green hill far away,' followed by the profane, secular virility of 'John Brown's body.' "A tambourine with 'God is Love' written on it became highly unreasonable, and flew up noisily round our heads. "The rough stertorous breathing of the medium continued, and a faint tapping sound heralded a voice speaking from one of the trumpets that was well adrift from its moorings. A faint, childish voice said in a voice of deep melancholy that it was 'Very, very happy.' More voices spoke." "Water was splashed about (there was none in the room when we started) and books took off from their shelves." "Table moved." "The medium remained lashed to his chair." "A clockwork train ran across the floor." "Suddenly a heavy table slowly left the ground. The man who was sitting next to it said calmly 'The table's gone !' The photographer released his flash-you see the result on the right." "At no time did the medium move from his chair. I swear it." "The table landed with a thump in the middle of the circle. A book that was on it remained in position." "I'll pledge my word that not a soul in the room touched it. It was so heavy that it needed quite a husky fellow to lift it. I felt the weight of it afterward." "What price cynicism ? What price heresy?" "Don't ask me what it all means, but you can't tell me now that these strange and rather terrifying things don't happen." "I was there. I saw them. I went to scoff." "But the laugh is sliding slowly round to the other side of my face." (Signed) 'CASSANDRA.' And this: THE séance reported took place on May 24th, 1939, and occupied two pages of the Sunday Pictorial dated May 28th, 1939. Mr. Gray prefaced his report with an affidavit as follows: "I, BERNARD GRAY, of 27, Barn Rise, Wembley Park, in the County of Middlesex, journalist, make Oath and say as follows - "1. That my description of the incidents enumerated in the Article written by me hereunto annexed and marked 'B.G.' to appear in the issue of the Sunday Pictorial of the Twenty-eighth day of May One thousand nine hundred and thirty-nine under the heading of' I Swear I Saw This Happen' is true. "2. 
I further make Oath and say that the incidents so described in such Article did occur in my presence." This oath was sworn before a solicitor yesterday.

I bound him to his chair, hand and foot, with knots and double knots which a sailor once taught me. Just to make sure he couldn't wriggle out and back without my knowing it, I tied lengths of household cotton from the ropes to the chair legs. And I sewed up the front of his jacket with stout thread. So began my second investigation into the mysteries of Spiritualism.

The man I had trussed up was Jack Webber, formerly a Welsh miner. He's now a medium - a man for whom such remarkable claims are made that I selected him for my first test. Through him, I was told, are performed some of the most astonishing miracles of spirit power, physical demonstrations intended to prove the reality of life after death. And in this, my second adventure into Spiritualism during my association with the Sunday Pictorial, I want physical phenomena. Startling deeds, not words, as proof. Not testimonies of people claiming to be healed, not messages from the dead. Just material facts which a materially minded man like me can grasp. I want final and complete conviction. That is more important to me than Hitler, the Axis, or even the threat of war. And that is why I have asked the Editor to allow me - for a while - to leave politics, and go in search of Truth.

So we sat, fourteen of us, a cheerful, talkative group of very ordinary people, in a plainly furnished room at Balham, London. There was a Metropolitan policeman. A consulting engineer. A waiter. A postman. A foreman plumber. Several women of various ages. And next to me, between the medium and me, Mr. Harry Edwards, leader of the Balham Psychic Society, by trade a printer. We all held hands loosely, Mr. Webber settled himself back as comfortably as my knots would allow, and out went the light, leaving only a red bulb gleaming dully through the darkness from the middle of the room.

Things began to happen immediately. They went on happening with remarkable rapidity, with startling variety, for ninety minutes. But I do not want to recount them in order. For I want to describe first two astonishing happenings which make the rest seem small in contrast. Happenings which I, personally, can only compare with the miracles of the New Testament.

I am sitting, remember, only one removed from the medium. An hour of the séance has gone by. The early tenseness, the trace of excitement, which perhaps affected me at the start has disappeared. I am my normal, cool, and vigilant self, alert for any sign of deception, accustomed to the eerie glimmer of light we get from the red bulb near the ceiling. In the corner, so near I can touch him, the medium is breathing heavily, gulping occasionally, moaning uneasily at times, like a man with a nightmare. Suddenly, he gurgles alarmingly, as if making some still greater effort.

Before me rises a kind of tablet, rather like a slate, and from the upper surface it sheds a luminous white light. I watch it intently, not in the least perturbed. I saw it in its normal state before the séance started. An ordinary piece of four-ply wood, about a foot long and nine inches wide. Now it hovers in front of the medium's face, its soft radiance lighting his features so clearly I can see the closed eyes and the twitching lips. It moves gently down to his hands and I see quite clearly that the arms are still bound to the chair. The glowing tablet has moved over to me.
It hangs motionless so close to my face I feel that if I breathe hard I shall blow it away. "Watch!" says Mr. Edwards, giving my hand a squeeze. Then above the tablet I begin to see something white emerging from the darkness. Almost invisible at first, it grows stronger every moment, like a motor car headlamp advancing through fog; until I can clearly see it as a diaphanous ellipse, standing on its end, as it were, on the tablet. "Ectoplasm," says Mr. Edwards. "Watch closely in the centre of it!"

No need to tell me. My eyes are glued on it, though, I want to emphasize, I'm still cool and unemotional. Now, framed in this luminous halo, I can perceive dimly what appear to be features. They are becoming clearer, easier to trace. There's the nose, and - yes - the mouth. The eyes, and, my God! The eyelids are moving. The tablet moves still closer. The eyes, soft and natural, are looking directly into mine. I jerk myself back to a detached, inquisitive state of mind, examine the thing in front of me closely and searchingly. It's not like the pictures of spirit faces many of us have seen in Spiritualist papers. It's not white and unearthly, like the frame in which it is set. RATHER IS IT A HUMAN FACE - BUT SOFTER, FINER, AND SOMEHOW DIFFERENT. I can trace the cheek-bones fading back from the eyes. The lips, they are quite clear. The chin, rounded and delicate, is silhouetted against the lower rim of the halo. I recognize it suddenly as the face of a very old lady. Just like a lovely miniature - for it is much smaller, now I come to think, than the face of any human adult.

"Try and speak to us," says Mr. Edwards, encouragingly. I am watching the lips. They part a little, move with an effort. There's a whisper. What is she saying? Who is she speaking to? Yes - I've got it. "Who's she speaking to?" I ask, without taking my eyes off the face for a second. "You," replies Edwards. "Speak to her!" "Who are you?" I ask, gently. "I am--," she answers, and whispers a name I shall not repeat - it is personal. "I cannot stay," she goes on. "I just want you all to see me. God bless you, my boy ..."

The tablet and its burden move away. I can see it floating around our circle. Other sitters are exclaiming that they can see it, quite plainly, that it's wonderful. The tablet returns to me. The features in the miniature are fading, like outlines yielding to the dusk of a summer evening. Now the halo is going too. Only the tablet is left. Its gleam disappears with the suddenness of a light being extinguished. The tablet falls with a clatter at my feet. "Lights on," says a voice instantly. There's the click of a switch. In less than five seconds the whole room is bathed in electric light. Everybody is in his or her place, holding hands. The medium is bound just the same in his chair, unconscious in his trance.

The deep voice which comes from the medium's corner - they call it the voice of Black Cloud, Webber's Indian spirit "guide" - says: "I want the gentleman sitting next to Mr. Edwards to hold the medium's right hand. I want the lady on the left of the medium to hold his left hand." Edwards guides my hand over his knees to the hand of the medium. I feel my fingers seized in a powerful grasp. The pressure tightens till it hurts. I set my teeth and wait. The medium is moaning like a man in pain. I can feel a soft fabric rubbing against my wrist. "Can you feel his coat?" asks the deep voice in the corner. "I can feel some kind of material on my wrist," I answer, readily.
"I am dematerializing his coat and taking it off." Now the coat is rubbing the other side of my wrist. Something drops to the floor with a light, rustling impact. Simultaneously, it seems, somebody presses the switch. The medium is in his shirt-sleeves. He is no longer wearing his coat. Round his arms, over his shirt now, are the ropes, still fastened by my patent knots. The thin strands of cotton from the ropes to the chair are unbroken. On the floor, the medium's jacket. Not a stitch holding the edges together broken. My twisted thread round the button just as I had left it. "That is merely intended to prove to you that the spirit world exists and has power to dematerialize," says the deep voice in the corner, when the lights are off again. "Later I hope to replace the medium's coat." Half an hour later the lady on the other side and I are asked to hold Webber's hands a second time. Again the grip is firm enough to be painful. A rustling. Cloth rubbing against my wrist again. Yes, and now the other side. Lights. Webber is wearing his coat once more. Over and round each arm, the bonds. The cotton intact. The thread just as before. BUT THE BONDS AND THE COTTON ARE OVER THE COAT. "My hand was gripped by his all the time," says the girl across from me, rubbing her fingers. "And I felt the coat go through my wrist. Didn't you ?" Well, those two happenings, or miracles - call them what you like, take a bit of explaining away. There were other things too. Heaps of them. "I can feel a hand on my head," said Mr. Edwards, casually, just as if it were quite a natural thing for a hand to emerge from nowhere. "I can feel something on my head," I said a moment later, and gripped Edwards's hand more tightly to make sure it hadn't been raised. Something was pulling my hair pretty hard. I realized then with a sense of shock that the "something" was definitely fingers, yet rather different from human fingers. They felt sharper, more like claws, seemed almost metallic at the tips. My neighbour chuckled. "I know what they're doing," he said, highly amused. The fingers pulled me firmly by the hair in Edwards's direction, till my head was touching his. My hair was pulled and twisted about for fully a minute. "We're being tied together," said my neighbour, laughing. "Can't you feel your hair being twisted with mine ?" We were tied together, too ! We couldn't separate, and the séance was held up for a moment or two while the lights were put on so that we could be unraveled. "A mischievous trick," said everybody else, laughing at our plight. Mischievous, all right. Inexplicable, too. I'll swear nobody moved before, during, or after the knots were tied in our hair. Frequently throughout the proceedings the luminous trumpets were shooting about the room three at a time, with the speed and accuracy of swallows in flight. "I should like to be absolutely sure nobody is holding them," I said boldly, though I myself considered it impossible. One of the trumpets shot straight at my head with the speed of an express train, pulled up sharp just as it touched my temple, and I cringed expecting a knock-out blow. That tin cone proceeded to run itself on my face and round my head, pressing first the broad end, then the narrow end, against my lips to prove it had no earthly connection at any point on its surface. A bell which I'd seen on a table in a corner rose into the air and rang a rhythmic accompaniment to our singing. 
A pair of clappers, similar to those used by a dance band drummer, floated about clacking merrily in time with the music. In a powerful bass voice, which has been recorded on gramophone discs, "Reuben" led some of the singing. Toys in the room, illuminated by a strange incandescent glow, leapt from the table and sailed about near the ceiling. A boy, I was told, plays with the toys - a boy who died some years ago. As something moved off the table and began to dart about the room, Mr. Edwards explained that it was a doll. Whatever it was, it settled on my knee, and frolicked up and down my leg. I could feel it as well as see it glowing, like an outsize glowworm. It came to rest finally on my knee. And when the lights came on, I found that it was indeed a toy elephant, such as any child would use in play. You see, therefore, it wasn't a gloomy gathering by any means. The strange pranks with the toys - a clockwork engine wound itself up and ran itself down near the ceiling - distinctly enlivened the proceedings. All these little things, however, paled into insignificance beside a remarkable demonstration of furniture removing by unseen hands. I saw it in passage, because it was outlined against the red light. And of course there were spirit messages for some of the sitters. I do not want to write about them. In this series of articles I am concerned more with incidents. Well, that is my testimony. I cannot explain anything I saw. And although many of my friends will think I've gone crazy - I say again: I SAW IT HAPPEN.

Wednesday, May 2, 2012

David Bohm, one of the top theoretical quantum physicists, believed in parapsychology

I have updated the Eminent Researchers page on my web site to include David Bohm. David Bohm was one of the best quantum physicists of all time and one of the most significant theoretical physicists of the 20th century (Wikipedia): David Joseph Bohm FRS[1] (20 December 1917 – 27 October 1992) was an American-born British quantum physicist who contributed to theoretical physics, philosophy of mind, and neuropsychology. David Bohm is widely considered to be one of the most significant theoretical physicists of the 20th century.[2]

In the article David Bohm and Jiddu Krishnamurti, which appeared in the Skeptical Inquirer, July 2000, Martin Gardner wrote that Bohm was favorably impressed with parapsychology, including Rupert Sheldrake's morphogenetic fields. Bohm took Uri Geller's psychic phenomena seriously and carried with him a key bent by Geller. Bohm believed in panpsychism; in one interview he said, "Even the electron is informed with a certain level of mind."

Tuesday, May 1, 2012

Erwin Schrödinger (Nobel Prize in Physics) believed consciousness was not produced by the brain and could not be explained in physical terms

I have updated the Eminent Researchers page on my web site to include Erwin Schrödinger. Erwin Schrödinger received the Nobel Prize in Physics in 1933. He believed consciousness was not produced by the brain and could not be explained in physical terms: Erwin Rudolf Josef Alexander Schrödinger (12 August 1887 – 4 January 1961) was an Austrian-born physicist and theoretical biologist who was one of the fathers of quantum mechanics, and is famed for a number of important contributions to physics, especially the Schrödinger equation, for which he received the Nobel Prize in Physics in 1933.
In 1935 he proposed the Schrödinger's cat thought experiment.[2]

Some quotes by Schrödinger:

The observing mind is not a physical system, it cannot interact with any physical system. And it might be better to reserve the term "subject" for the observing mind. ... For the subject, if anything, is the thing that senses and thinks. Sensations and thoughts do not belong to the "world of energy."

I am very astonished that the scientific picture of the real world around me is deficient. It gives a lot of factual information, puts all our experience in a magnificently consistent order, but it is ghastly silent about all and sundry that is really near to our heart, that really matters to us. It cannot tell us a word about red and blue, bitter and sweet, physical pain and physical delight; it knows nothing of beautiful and ugly, good or bad, God and eternity. Science sometimes pretends to answer questions in these domains, but the answers are very often so silly that we are not inclined to take them seriously.

There is obviously only one alternative, namely the unification of minds or consciousnesses. Their multiplicity is only apparent, in truth there is only one mind.
Friday, July 5, 2013

Quantum Contradictions 2: Pauli Exclusion Principle

Wolfgang Pauli stated in General Principles of Quantum Mechanics (1958) on the Pauli Exclusion Principle (PEP):

• The fact that quantum mechanics yields more states than actually occur in nature (multiD wave function) is still a puzzle, and it is hoped that a future theory of elementary particles will bring a deeper insight into the essence of this restricted choice of nature (symmetric or antisymmetric multiD wave function).

But PEP is still a puzzle and can be viewed as the basic puzzle of quantum mechanics, which harbors a basic contradiction from two conflicting ad hoc assumptions:

1. An atom with N electrons is described by a wave function as solution to a linear scalar Schrödinger equation in 3N space dimensions, with a richness beyond any reality and imagination, which can be viewed as a scientific monster.

2. To balance the richness and come to grips with the monster, a restriction to symmetric or antisymmetric wave functions is made.

An ad hoc rich Ansatz is thus restricted ad hoc, which violates Ockham's Razor requiring science to avoid detours. A rational scientific approach would be to start out with a different mathematical model in the form of a system of N wave functions depending on 3 space dimensions, as solution to a coupled system of N scalar wave equations in 3 space dimensions, as suggested by Hartree directly after the multiD Schrödinger equation was presented in 1925, and explored in Many-Minds Quantum Mechanics.

Pauli was never happy with his principle, even if it gave him the Nobel Prize in Physics "for the discovery of the Pauli Principle", which he does not hide in his Nobel Lecture (as another contradiction):

• Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling and I still have it today, that this is a deficiency.

1 comment:

1. I have a question for you. I don't really get what the physical reality would be when you use your interpretation for a free electron that scatters against a potential. For one electron your equation is the same as the Schrödinger equation. But the solution then scatters in all directions. What physical reality does this correspond to?
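For reference, the contrast the post draws can be written out explicitly. What follows is a minimal sketch in standard atomic units for an atom with nuclear charge Z (my notation, not the blogger's). The linear Schrödinger equation lives on $\mathbb{R}^{3N}$:

$$ i\,\partial_t \Psi(x_1,\dots,x_N,t) = \Big[\sum_{j=1}^{N}\Big(-\tfrac{1}{2}\Delta_{x_j}-\frac{Z}{|x_j|}\Big) + \sum_{j<k}\frac{1}{|x_j-x_k|}\Big]\Psi, $$

whereas the Hartree alternative is a system of N coupled equations, each posed on $\mathbb{R}^3$:

$$ i\,\partial_t \psi_j(x,t) = \Big[-\tfrac{1}{2}\Delta-\frac{Z}{|x|} + \sum_{k\neq j}\int\frac{|\psi_k(y,t)|^2}{|x-y|}\,dy\Big]\psi_j(x,t), \qquad j=1,\dots,N. $$

The first is then restricted to (anti)symmetric solutions; the second replaces the electron-electron interaction by each electron moving in the mean field of the others, at the price of the equations becoming nonlinear.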
Saturday, August 4, 2012

Orthogonal: Riemannian Quantum Mechanics [Extra] - Greg Egan

"Units

Throughout these notes, we've adopted a system where time and distance are measured in identical units. This is the equivalent of setting the speed of light, c, equal to 1 in our own universe (for example, by measuring distances in metres and using the time it takes for light to travel one metre as the corresponding unit of time). In the Riemannian universe, it amounts to choosing units for time such that Pythagoras's Theorem holds true, even when one side of the triangle involves an interval of time rather than space. In the novel Orthogonal, it is found empirically that this is the same as setting the speed of blue light equal to 1. In this section, we will go one step further and choose units for mass and energy such that the "reduced Planck's constant", ℏ = h/(2π), is equal to 1. Mass and energy are then measured in units with the dimensions of inverse lengths or spatial frequencies — or, equally, inverse times or time frequencies. Our particular choice means that the Planck relationship between frequency ν and energy, E = h ν, becomes E = 2 π ν = ω, where ω is the angular frequency of the wave, and the relationship between spatial frequency κ and momentum is p = 2 π κ = k, where k is the angular spatial frequency. The maximum angular frequency ωm that appears in the Riemannian Scalar Wave equation is then simply equal to the rest mass of the associated particle.

Relativistic Energy and Momentum Operators

In the non-relativistic quantum mechanics we have discussed so far, we have simply applied the usual Schrödinger equation to the potential energy associated with the force between charged particles, on the basis that non-relativistic classical dynamics in the Riemannian universe is identical to Newtonian mechanics, so long as we treat kinetic energy as positive and choose the sign for the potential energy to be consistent with that. For Riemannian relativistic quantum mechanics, we will need to do things slightly differently. The structure of quantum mechanics in its usual formulation is closely linked to the Hamiltonian form of the corresponding classical mechanics, and in the Riemannian case the momentum conjugate to each coordinate in the Hamiltonian sense is the opposite of the relativistic momentum in the same direction."

5 out of 5
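Collecting the relations quoted above in one place (and, for the dispersion relation, assuming the form of the Riemannian Scalar Wave equation given in Egan's earlier notes in this series): with ℏ = 1,

$$ E = \omega, \qquad p = k, $$

and inserting a plane wave $e^{i(k\cdot x - \omega t)}$ into the scalar wave equation $\partial_t^2\varphi + \nabla^2\varphi + \omega_m^2\,\varphi = 0$ gives

$$ \omega^2 + k^2 = \omega_m^2. $$

So the angular frequency $\omega$ can never exceed $\omega_m$, which is why $\omega_m$ is both the maximum frequency and, in these units, the rest mass: $E^2 + p^2 = m^2$, the Riemannian counterpart of the familiar Lorentzian relation $E^2 - p^2 = m^2$.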
Saturday, July 5, 2014

Give a Guy a Hammer ...

Mathematician Max Tegmark thinks the fundamental reality is math. A review of Our Mathematical Universe.

The unknown seems to drive us into conniptions, whether one's habit of thought is theology, science, or formal philosophy. The idea that the fundamental reality of our cosmos might be inexplicable is as foreign to the most advanced scientist as it was to the earliest shaman. So there we are. Physicists are knocking their heads against several walls, such as dark energy, the proper interpretation of quantum mechanics, the union of quantum mechanics and relativity/gravity, and of course, the origin of the universe. They have virtually run out of experimental options, the colliders having become as super as they are realistically going to get. What now? One can sense this fix from recent years of the magazine Scientific American, which runs ever more fanciful articles about the nature of the universe under the heading of physics. Speculation is running rampant, and the field seems to be gradually leaving the orbit of reason. What is time? What is space? Quantum foam, strings, etc. All worthy questions, but far too speculative and sketchy to be fed to lay readers.

A recent entrant in the cosmic speculation derby is Max Tegmark, with a book about how the universe is all a big mathematical structure. It is an excellent book in most respects, very readable and fair on the known science. Even sensible in a pontifical denouement on social policy. He has the most sterling credentials as an MIT physics professor, cosmologist, and protégé of John Wheeler. I should add that I am no expert in the least respect here, so I am just offering an educated lay perspective on the book and its ideas, as presented. There are excellent aspects also to his cosmological speculations. For instance, he develops a helpful hierarchy of multiverse categories, this being a book largely about multiverses:

Level 1 multiverse: This is the notion that inflation during the big bang gave rise not only to the region of space we can see, but to much more. How much more? Hard to say, but it could be rather enormous, all within the product of the big bang we date to ~13.8 billion years ago.

Level 2 multiverse: Here the additional notion is added that inflation, the key process that we know of from the big bang, could have been a continuous process, not just producing our universe, but many, indeed an infinite number, of others in a process that is still going on. It adds the idea that these others might have different basic physics - different constants, symmetries, etc. Why this would be is due to the unboundedness of our current theories of what might have gone on. So why not everything possible?

Level 3 multiverse: Hugh Everett came up with an interpretation of quantum mechanics that contradicts the Copenhagen interpretation, and posits that the wave function governed by the Schrödinger equation never "collapses". It just spawns other realities where events we think occur randomly actually occur in all possibilities, each in its own reality. This does not imply the multiplication of mass and energy into these other universes, but the superposition of an infinity of different possibilities in the mathematical space of quantum mechanics - the Hilbert space - of which we see only one sample at any moment. So it all looks the same as the Copenhagen collapse interpretation.
Level 4 multiverse: This is Tegmark's special theory, where not only does the level 2 multiverse generate an infinity of universes with different laws from some originating ur-structure, but even the most basic mathematical structure, his ultimate reality, can differ, generating alternate inflation (or non-inflation) regimes of every possible type. Indeed, he speculates that every computable mathematical structure exists and generates its own reality.

To be brief, I can easily understand the level 1 multiverse, and don't have a big problem with the level 3 multiverse of quantum mechanics. The others are a different story. Level 2 seems a cop-out, interpreting a lack of knowledge and specification about the universe as a permissive free-for-all where everything possible occurs. The premise is, as Tegmark notes, that our universe has about 32 numbers from which physicists can, in principle, calculate all physical aspects of our universe (not counting the pending conundrums of dark energy and dark matter, among others). And the values of these numbers are, of course, quite important. Any little change here or there would blow us to smithereens. So how did they get set up?

There are two basic approaches. The traditional way was to say god did it, end of story. A slightly more updated version is to look into the matter scientifically and keep hunting for simplifying and unifying theories, especially using mathematics. This has been the job of physics for several centuries, and it seems to have arrived at a sizeable set of irreducible particles and forces, but can't seem to break through to a universal theory. The most modern way is to say that all the possibilities occur in all possible universes, of which there are an infinity, and we find ourselves, naturally, only in the one that lets glorious us happen. Ergo, the level 2 multiverse.

What is the prospect of yet more simplifying and unifying insights into the universe(s)? I have no idea. But the multiverse hypotheses seem to give up prematurely, and to what end? Even with a virtual infinity of universes, the chance that we get one that has 32 numbers, some possibly irrational and thus almost impossible to get precisely right, ranging over countless orders of magnitude, still seems slim. So I am still rooting for a unifying explanation rather than a ramifying one whose sense is saved only by the anthropic principle. And that is really what we are talking about at this point: a rooting interest in where scientific speculation heads, since no evidence to date decides among these possibilities, and evidence may never do so.

Now we get to the weirdest part of the book: the level 4 multiverse, or Tegmark's theory that reality, at its base, is math, not just that it is described by math. And that all possible mathematical structures give rise to their own multi-multiversi, etc., ad infinitum. This is all more than a little fanciful. And his arguments, forming the core of the book and the armature around which so much else is built, are surprisingly weak.

The beginning premise is that external reality exists, separate from us, and even separate from us as observers. This is not at all hard to accept. After all, the universe had to roil and moil for quite some time before we were here to observe it, so the people who posit reality as a figment of our imaginations, or quantum-wise demand observation as the requirement of reality, do not have much to stand on.
Then Tegmark goes on with the rest of his argument, which I abridge:

"If we assume that reality exists independently of humans, then for a description to be complete, it must also be well-defined according to nonhuman entities - aliens or supercomputers, say - that lack any understanding of human concepts."

"This means that it [a master theory of everything] must contain no concepts at all! In other words, it must be a purely mathematical theory, with no explanations or 'postulates' as in quantum textbooks ..."

"Taken together, this implies the Mathematical Universe Hypothesis, i.e. that the external physical reality described by the ToE [theory of everything] is a mathematical structure."

"This means that our physical world not only is described by mathematics, but that it is mathematical (a mathematical structure), making us self-aware parts of a giant mathematical object. A mathematical structure is an abstract set of entities with relations between them. The entities have no 'baggage': they have no properties whatsoever except those relations."

There, in a nutshell, is his argument. Note the sleight of hand of getting from a description of reality to the reality itself. He explains himself later on: "I'm writing is rather than corresponds to here, because if two structures are equivalent, then there's no meaningful sense in which they are not one and the same, as emphasized by Israeli philosopher Marius Cohen."

I can't say that this is convincing, at least to one untutored in the arts. One can also ask whether the starting premise makes any sense. Why must a universe be describable by any entities at all, human or non-human? It could just exist in some way and for some reason we cannot understand or describe. The assumption is that there is a theory of everything, which I would certainly like to see. But I don't think it is a given that such a thing exists, let alone that it needs to have the describability property Tegmark claims for it. It could just as well be indescribable, and filled with the relatively arbitrary properties we actually see. The one thing such a theory must be is consistent enough internally to produce a reality that has the symmetries and durable properties ("laws", constants, etc.) that we see in our versions of physics. And that, of course, is why mathematics is such a useful tool in physics: not because rocks are equations, but because our reality has, necessarily, the kinds of structures and consistencies that we can use mathematics to describe.

The ultimate theory may end up being a beautifully simple equation one can write on a T-shirt (as Tegmark dreams), but we don't know that yet, and it is very hard to see how that could be, with so many simple mathematical structures already known and tested in this respect. Are strings simple? Probably not. And why one would want to theorize our reality as being a math structure ... that is admittedly beyond me. Tegmark claims that, among other benefits, this gets rid of an infinite regress issue, as we look for ever more fundamental particles and principles. (Though we have reached an end in particle terms, not being able to divide the electrons and quarks any further.) Having the most fundamental "one" be a total abstraction, and indeed every possible total abstraction in his level 4 multiverse, buys finality at the cost of nonsensicality, little better than the turtles or deities of yore.
Specifically, it is Platonism revived, thinking that what is in our minds (where math is, exclusively) is the fabric of the universe, not its map. Indeed, one suspects in the end that this book is another edition in the old-as-humanity tradition of seeing the origins of the cosmos in the mirror.

• The supremes are losing their minds. Hobby Lobby will live in infamy.
• Can Muslim companies mess with their employees' healthcare and personal lives too?
• The tortured reasoning of turning money into "free" speech.
• Voters vote for climate action. How does money vote?
• Bob Cringely: Bitcoins have come in from the cold.
• Sectarianism, insurrection and theocracy ... not just somewhere else.
• What money does to our minds.
• On being a disposable worker at Walmart.
• Bill Mitchell on European economic policy: groupthink followed by fiasco.
• Jobs and the US economy ... have the green shoots finally arrived? We could have been here far, far sooner.
• This week in the WSJ: "The more you help unemployed people, the more unemployed people you'll have."
• This week in Das Capital: "Accumulation of wealth at one pole is, therefore, at the same time accumulation of misery, agony of toil, slavery, ignorance, brutality, moral degradation, at the opposite pole."
• Economic quote of the week, from Joe Stiglitz: "In fact, Geithner's attempts to justify what the administration did only reinforce my belief that the system is rigged. If those who are in charge of making the critical decisions are so 'cognitively captured' by the 1 percent, by the bankers, that they see that the only alternative is to give those who caused the crisis hundreds of billions of dollars while leaving workers and homeowners in the lurch, the system is unfair."
Thursday, August 18, 2016
The quantum asymmetry between time and space
Joan A. Vaccaro. Published 20 January 2016. DOI: 10.1098/rspa.2015.0670

[Photo: Prof. Joan Vaccaro. Source: Griffith University]

An asymmetry exists between time and space in the sense that physical systems inevitably evolve over time, whereas there is no corresponding ubiquitous translation over space. The asymmetry, which is presumed to be elemental, is represented by equations of motion and conservation laws that operate differently over time and space. If, however, the asymmetry were found to be due to deeper causes, this conventional view of time evolution would need reworking. Here we show, using a sum-over-paths formalism, that a violation of time reversal (T) symmetry might be such a cause. If T symmetry is obeyed, then the formalism treats time and space symmetrically, such that states of matter are localized both in space and in time. In this case, equations of motion and conservation laws are undefined or inapplicable. However, if T symmetry is violated, then the same sum-over-paths formalism yields states that are localized in space and distributed without bound over time, creating an asymmetry between time and space. Moreover, the states satisfy an equation of motion (the Schrödinger equation) and conservation laws apply. This suggests that the time-space asymmetry is not elemental as currently presumed, and that T violation may have a deep connection with time evolution.

Data accessibility: Electronic supplementary material is available at http://dx.doi.org/10.1098/rspa.2015.0670 or via http://rspa.royalsocietypublishing.org.
Competing interests: I have no competing interests.
Funding: I did not receive external funding for the research reported here.
Acknowledgements: I thank D.T. Pegg, H.M. Wiseman, M.J. Hall and T. Croucher for helpful discussions.
Received September 28, 2015. Accepted December 23, 2015.
© 2016 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
Cover Story
Unlike linear waves, solitons create their own channel as they travel in a uniform medium, remaining localized and preserving their shape. Whereas linear waves always pass through one another, solitons can be dramatically altered by... more>>

Massive Soliton WDM Transmission at N × 10 Gbit/sec, Error-free Over Transoceanic Distances
We have demonstrated massive wavelength division multiplexing (WDM), over transoceanic distances, in multiples of 10 Gbit/sec. The vital ingredients to this success were first, solitons, second, sliding-frequency guiding filters, and third, the use of "dispersion-tapered" fiber spans between amplifiers, i.e., spans for which D(z) tends to follow (here in step-wise approximation) the same exponential decay profile as the signal energy. Although the first two ingredients and their benefits are by now well known, the third, at least in this context, is both novel and vital. more>>

Bright Temporal Soliton-like Pulses in Self-defocusing Media
Bright temporal solitons are generating a great deal of interest because of their possible use in long-distance optical fiber communications. They maintain their temporal shape because the effects of dispersion and of the Kerr nonlinearity cancel each other. Until recently, the only demonstration of bright temporal solitons had been in silica optical fibers, which possess a self-focusing nonlinearity (n2 > 0) and anomalous dispersion (β2 < 0) for λ > 1312 nm. The nonlinear Schrödinger equation (NLS), which describes the propagation of an optical pulse through a nonlinear optical medium with chromatic dispersion, allows bright temporal solitons as long as the nonlinear refraction (n2) is of opposite sign to the dispersion (β2). Hence, a medium with normal dispersion and a self-defocusing nonlinearity should also support bright temporal solitons. more>>

New Semiconductor Materials Offer Promise for Ultra-fast Optical Devices
Future communications and computing systems will require advanced capabilities to handle the increasing requirements for ever-faster and higher-bandwidth operation. To move beyond the system limits imposed by electronic components, researchers are investigating the use of all-optical components for ultrafast operations such as switching and time-division multiplexing and demultiplexing. more>>

Manakov Spatial Solitons
The Manakov soliton is a two-component soliton that was first considered by Manakov in the early 1970s. Based on the work of Zakharov and Shabat, Manakov found that the coupled nonlinear Schrödinger equations (CNSE) with a special choice of the coefficients in front of the nonlinear terms can be solved exactly. This system is integrable, and its solitons therefore have a number of special properties which might be useful in practice. more>>

Novel Resonant Structures for Laser Light Modulation
We have recently developed and demonstrated, for the first time, novel resonant grating/waveguide structures that can modulate laser light at relatively high rates. We believe that these can be incorporated into arrays to form compact spatial light modulators operating at several hundred megahertz. more>>

Optical Modulation with a Resonant Tunneling Diode
Since the discovery that the resonant tunneling diode (RTD) has sufficient negative differential resistance for practical devices, most work has concentrated on entirely electronic devices for microwave applications and for sub-millimeter wave generation, where oscillation at frequencies up to 720 GHz has been achieved.
Recently we reported on an optoelectronic application of a resonant tunneling diode in which we used a simple direct integration scheme to achieve optoelectronic modulation from the electric field associated with the RTD, by embedding the RTD directly in an optical waveguide. more>>

Optoelectronics Technology Consortium 32-Channel Parallel Fiber Optic Transmitter/Receiver Array Testbed
In the data communications industry, there are emerging requirements for a short-distance (tens to hundreds of meters), high-speed (200 Mbit/sec to 1 Gbit/sec) data bus for large computing environments, clustered parallel computing systems, and datacom switching. In response to these requirements, a parallel optical fiber interconnect has been developed by the Optoelectronics Technology Consortium (OETC), an ARPA-funded industry alliance including IBM, AT&T, Honeywell, and Lockheed Martin. This year, IBM completed testing of a 32-channel OETC fiber optic transmitter/receiver array in a product testbed, and announced future availability of a commercial product called "Jitney" based on the OETC prototype. more>>

Opto-Electronic Microwave Oscillator
Photonic applications are important in RF communication systems to enhance many functions, including remote transfer of antenna signals, carrier frequency up- or down-conversion, antenna beam steering, and signal filtering. Many of these functions require reference frequency oscillators. However, traditional microwave oscillators cannot meet all the requirements of photonic communication systems that need high-frequency and low-phase-noise signal generation. Because photonic systems involve signals in both optical and electrical domains, an ideal signal source should be able to provide electrical and optical signals. In addition, it should be possible to synchronize or control the signal source by both electrical and optical means. more>>

Simultaneous Laser Diode Emission and Detection for Optical Fiber Sensors
Reflective fiber optic intensity sensors often use a coupler to guide part of the reflected light back to a photodetector. We have demonstrated a sensor that requires no coupler, and instead uses the emitting laser diode for photodetection. A laser diode operating at constant current can detect light reflected into the junction region if the terminal voltage is monitored. We used a self-detecting source to sense rotation using a simple magneto-optic transducer and a single-fiber sensor system. more>>

Compact and Parallel Free-Space Optoelectronic Interconnection and Logic Operations with Optical Thyristors
For several years there has been a realization that electronics is facing limits in both the speed and parallelism that may be achieved with conventional wiring. This is particularly apparent for chip-to-chip and board-to-board interconnects. Optics has been widely studied as a suitable high-speed interconnection medium to overcome this interconnection bottleneck. It is attractive due to its better immunity to capacitive and inductive crosstalk, signal dispersion, and electromagnetic interference. more>>

Three Dimensional Reconstruction of Random Radiation Sources
The degree of spatial coherence of an electromagnetic field is a useful function mainly for two reasons. First, it provides information on the spatial coherence of light sources. Second, due to the Van Cittert-Zernike theorem, knowledge of the coherence distribution induced by a source enables one to compute its shape.
Explicitly, it is manifested in this theorem that the two-point degree of coherence in the far field of a quasi-monochromatic, spatially incoherent light source is proportional to the Fourier transform of the source's planar intensity distribution. Therefore, by measuring the two-point degree of coherence in the far field, one can image the source distribution. This imaging technique is, among others, the theoretical basis of the very long baseline interferometers used in astronomy. However, this technique has been limited to the imaging of planar, two-dimensional objects. more>>

Room Temperature, Mid-Infrared Quantum Cascade Lasers
The quantum cascade (QC) laser [1] is a new optical source in which one type of carrier, typically electrons, cascading down an electronic staircase... more>>

Achievement of the Saturation Limit and Energy Extraction in a Discharge Pumped Table-Top Soft X-ray Amplifier
A major goal in ultrashort-wavelength laser research is the development of compact "table-top" amplifiers capable of generating soft X-ray pulses of substantial energy that can impact applications. Such development motivates the demonstration of gain media, generated by compact devices, that can be successfully scaled in length to reach gain saturation. At this condition, which occurs when the laser intensity reaches the saturation intensity, a large fraction of the energy stored in the laser's upper level can be extracted. To date, gain saturation had only been achieved in a few soft X-ray laser transitions, in plasmas generated by some of the world's largest laser facilities. more>>

Multiple-wavelength Vertical Cavity Laser Arrays with Wide Wavelength Span and High Uniformity
Vertical-cavity surface-emitting lasers (VCSELs) are promising for numerous applications. In particular, due to their inherent single Fabry-Perot mode operation, VCSELs can be very useful for wavelength division multiplexing (WDM) systems, allowing high bandwidth and high functionality. Multiple-wavelength VCSEL arrays with wide channel spacings (~10 nm) provide an inexpensive solution to increasing the capacity of local area networks without using active wavelength controls. more>>

New Techniques in Wideband Terahertz Spectroscopy
In recent years, remarkable progress has been made in the development of spectroscopic capabilities for coherent terahertz (THz) measurements. This spectral region is one of great interest because of the abundance of excitations in molecular systems and condensed media. It also represents a region in which the dielectric properties of materials are of critical importance for high-frequency electronics and optoelectronics. A key ingredient of the significant advances in this field is the development of broadband, optically driven sources and detectors of terahertz radiation. The ready availability of laser pulses with durations of ~10 fsec suggests the potential for extending the bandwidth of coherent spectroscopy to significantly higher frequencies. By using materials with an instantaneous nonlinear optical response for both emission and detection, we may be able to capture much of this enormous bandwidth. more>>

After Image Interferometric Optical Tweezers
Optical trapping of micron-size dielectric microspheres using a single-beam gradient force (Fig. 1a) was first demonstrated by Ashkin in 1986.
Since then, extensive research and development of this technique has turned it into a practical device (known as optical tweezers) which has been used in a wide variety of biological and biomedical applications. more>>

Optical Patterning of Three-Dimensional Spatio-Tensorial Micro-Structures in Polymers
One challenging requirement for the design of devices for photonic applications is to achieve complete manipulation of molecular order. The great latitude and flexibility of optical methods offers interesting prospects for material engineering using light-matter interactions. Efficient spatial modulation of polymer macroscopic properties is usually achieved using holographic recording of an interference pattern between intense light waves. For second-order optical nonlinear processes, full control of the molecular orientation is mandatory. However, patterning with polarized monochromatic beams results only in molecular alignment. We report on a new, purely optical technique based on a non-classical holographic process with coherent mixing of dual-frequency fields. It enables efficient and complete three-dimensional spatio-tensorial control of polymer micro-structures. more>>

Spontaneous Density Grating Formation in Hot Atomic Vapor
Recently a new gain mechanism has been observed in a nonlinear optical system: the spontaneous formation of a density grating in an atomic vapor through interaction with a strong pump field. A sodium-filled cell is pumped by a high-intensity (I ~ 10⁴ W/cm²) circularly polarized laser beam detuned from resonance, and is probed by a weak field degenerate in frequency with the pump and with the same polarization. The probe beam is introduced into the cell in two different geometrical configurations: nearly parallel (angle ~5°) and nearly antiparallel (same angle, but opposite direction) to the pump. For sufficiently high pump intensity, and for appropriate values of detuning and atomic density, the probe beam displays a gain as large as 30% (pumping only a small fraction of the probe cross section) at the expense of the pump, only in the nearly counterpropagating geometry. more>>

Single-Atom Quantum Logic Gate and Schrödinger Cat State
One of the fundamental tenets of quantum mechanics is the existence of superposition states, or states whose properties simultaneously possess two or more distinct values. Although quantum superpositions and entanglements seldom appear outside of the microscopic quantum world, there is growing interest in the creation of "big" superpositions and massively entangled states for use in applications such as a quantum computer. We report first steps toward this goal by demonstrating a fundamental two-bit quantum logic gate and a "Schrödinger cat"-like state of motion with a single trapped ⁹Be⁺ ion. Both experiments allow sensitive measurements of decoherence mechanisms, which will play an important role in the feasibility of quantum computation. more>>

Polarization-entangled Photons and Quantum Dense Coding
Entangled states of particles form the cornerstone of the newly emerging field of quantum information: they are central to tests of nonlocality, have been proposed for use in quantum cryptography schemes, and would arise automatically in the operation of quantum computers. Polarization-entangled photons are preferable because they are easier to handle. more>>

Excitation of a Schrödinger Cat State Within an Atom
Experiments in a number of laboratories over the past few years have explored the classical limit of a single atom.
In this limit, the electron wave function takes on the form of a spatially localized wave packet moving with the classical orbital period around a classical Keplerian orbit of near-macroscopic dimensions. In various experiments the diameter of this orbit ranges from approximately 100 to 100,000 nm. The behavior of the atom, even in this limit, is quite rich, displaying a range of classical as well as distinctly quantum features. more>>

Excess Quantum Noise Fluctuations in Unstable-resonator Lasers

Self-Trapping of Partially Spatially Incoherent Light Beams
Here, we report the first observation of self-trapping of a "partially" spatially incoherent optical beam in a nonlinear medium. Self-trapping occurs in both transverse dimensions when diffraction is exactly balanced by photorefractive self-focusing. We have used the photorefractive nonlinearity associated with photorefractive solitons as the self-trapping mechanism and generated a stable, two-dimensional, 30-μm-wide, spatially incoherent self-trapped beam. more>>

Supramolecular Enhancement of Second-Order Optical Nonlinearity
Only noncentrosymmetric molecules can possess a second-order nonlinear response, i.e., they have a nonvanishing first molecular hyperpolarizability. Polar molecules with donor and acceptor groups connected by a conjugated π-electron system are traditional organic second-order materials (Fig. 1). For macroscopic noncentrosymmetry, such molecules are poled in a host material using a static electric field. The nonlinear coefficients of poled materials are proportional to μβ, where μ is the permanent dipole moment of the molecules and β is the vectorial part of the first hyperpolarizability. more>>

Stopping Light in its Tracks
To control the speed of a light pulse without absorbing its photons or distorting its shape is a challenging problem. However, this has been accomplished using fiber gratings, as part of a joint research program of the University of Sydney, the Australian Photonics Research Centre, Lucent Technologies, and the University of Toronto. more>>

Isotropic Liquid Crystal Fiber Structures for Passive Optical Limiting of Short Laser Pulses
Ever since the invention of the laser, there has been a need to protect the eye or sensitive optical sensors from damage by overexposure. The problem has become increasingly difficult with the advent of frequency-agile high-power pulsed lasers, which negate fixed line filters or optoelectronic/mechanical devices; all-optical or nonlinear optical means have to be used. In this context, various device concepts and nonlinear optical materials are being investigated. To satisfy such stringent requirements, it has become necessary to optimize both the device function and the material responses by specialized optical configurations. One means of achieving this is to use a fiber or waveguide geometry, in which highly intensity-dependent (optical limiting) processes occur more efficiently due to the spatial confinement over distances much longer than the Rayleigh range of tightly focused lasers. more>>

Texture in Binary Images
Image texture is one of the important parameters in the field of digital image processing. In displayed images, it affects the reproduction of the local average gray level, because usually there is a certain amount of pixel overlap. In image perception, it may result in the appearance of false contours between regions with different textures.
There is a demand for a quantitative description of textural characteristics in the various fields of digital image processing, of which digital halftoning is one. more>>

Photonic Signal Processing for Biomedical and Industrial Ultrasonic Probes
Ultrasonics has been widely used in medical, industrial, and scientific applications. In medical applications, ultrasonics is an essential diagnostic method in internal medicine, urology, and vascular surgery. High-Intensity Focused Ultrasound (HIFU) and lithotripsy applications use relatively low ultrasonic frequencies (< 100 kHz), while a 5-15 MHz band is typically used in diagnostic external-cavity imaging ultrasound. Today, with endoscopic applications in mind, a very high ultrasonic frequency (e.g., 100 MHz) probe with high (> 50%) instantaneous bandwidth is highly desirable, as higher frequencies give higher imaging resolution and smaller physical dimensions of the front-end intracavity transducer array. more>>

Atomic Lifetimes From Molecular Spectroscopy
Although molecular properties are clearly related to the properties of the constituent atoms, it has seldom been possible to make precision measurements of these atomic properties by examining the molecules. Over the last year or so, however, molecular spectroscopy has been shown to be a powerful technique for determining atomic lifetimes, and has provided the most precise alkali lifetimes yet reported, at levels ranging from 0.3% to 0.03%. more>>

Multiphoton Ionization with Precise Intensity Control
In the presence of strong laser fields (> 10¹² W/cm²), atoms and molecules can simultaneously absorb many photons to exceed the ionization limit, leading to the ejection of photoelectrons. The analysis of photoelectron kinetic energy spectra provides valuable insight into atomic and molecular structures. The kinetic energy can be determined by measuring the time-of-flight of the electrons over a known distance. more>>

Atomic Streak Camera Sees Rydberg Atoms Falling Apart
Highly excited, or Rydberg, atoms are an ideal quantum laboratory. In a Rydberg atom, the loosely bound electron moves in a large Kepler orbit around the atomic nucleus and is very sensitive to external perturbations. For instance, by applying a moderate electric field, the behavior of the quantum system is drastically influenced. A static field of a few kilovolts per centimeter is sufficient to change the bound Rydberg atom into a system in which the electron can escape. Within a few picoseconds (10⁻¹² sec) the atom falls apart. It is an experimental challenge to detect how this decay actually happens. Does the electron come out immediately, or does the atom emit the electron in subsequent bursts of probability that are signatures of the quantum nature of the system? more>>

Bragg Scattering from an Optical Lattice

Two-dimensional Photonic Bandgap Structures at 850 nm
The long-predicted benefits of photonic bandgap (PBG) technology, such as complete control over the spontaneous emission of an excited atomic system, are starting to become possible. Following initial doubt about the technological feasibility, a full bandgap was demonstrated at microwave frequencies several years ago.
more>>

Nanoparticle-Enhanced Photodetection
A properly placed layer of metal nanoparticles can increase the optical absorption within a thin photodetector [1]. Acting much like an array of microscopic antennas, the particles collect a fraction of the light that falls within their resonance bandwidth, coupling it into guided modes within the photodetector layer. more>>

Intracavity Phase Modulated Transmitter for Hybrid Lidar-Radar
This paper discusses the development of a microwave-modulated transmitter using a bulk phase modulator for a novel hybrid lidar-radar application. Aerial lidar (light detection and ranging) is used for underwater surveillance. A pulse of blue-green optical radiation is transmitted from an airborne platform, and target information is extracted from the detected echo. Attenuation, dispersion, backscatter clutter, and particularly the lack of coherent signal processing limit the performance of lidar. more>>

An Intuitive User Interface for Remote Adjustment of Optical Elements
As part of an ongoing effort to improve the imaging of the Multiple Mirror Telescope (MMT) south of Tucson, our group is concerned with developing adaptive techniques. When a star's light passes through the earth's turbulent atmosphere, the wavefront is distorted and the imaging of a ground-based telescope suffers. The Center for Astronomical Adaptive Optics (CAAO) builds instruments which measure and correct for this wavefront aberration. The wavefront-sensing part of the instrument was rebuilt using 18 Picomotors™ to simplify alignment. more>>

Nonlinear Optics Using Atomic Coherence Effects
Nonlinear optical mixing of existing laser frequencies to access portions of the spectrum where lasing action is not easily obtainable is common practice today. Various techniques, including important new ones like quasi-phase-matching in tailored nonlinear media, are aimed at efficient generation in the region of the spectrum from just under 200 nm in the ultraviolet to about a few microns in the infrared. more>>

1997 Funding for R&D Up: Poised to Plummet toward 2002
The results are in for 1997 federal appropriations for science. According to AAAS, Congress appropriated $74 billion for R&D, an increase of 4.0% from last year. About $14.8 billion of the total goes for basic research, an increase of 2.7%. R&D funding kept ahead of inflation, but the slope downward will have to get steeper to balance the budget by 2002. more>>

New Terahertz Beam Imaging Device
Recently a new electro-optic detection system has been used to characterize the temporal and spatial distribution of free-space broadband, pulsed electromagnetic radiation (THz beams). This detection system, which uses an electro-optic crystal sensor, provides diffraction-limited spatial resolution, femtosecond temporal resolution, DC-THz spectral bandwidth, and sub-millivolt-per-centimeter field detectability. The sensitivity and bandwidth of the electro-optic detectors are comparable or superior to conventional ultrafast photoconductive dipole antennas and liquid-helium-cooled bolometers. Advantages intrinsic to electro-optic detection include nonresonant frequency response, large detector area, high scan rate, low optical probe power, and large linear dynamic range. more>>
City College Fellowships Program
Gabriella Clemente
Mathematics Major, City College Fellow

Gabriella has a calling to understand how things work, capture the beauty of these processes, and express it. As a pure mathematics major, she is learning ways to give her calling—and her intuitions—a definite form. Gabriella is studying the time-independent Schrödinger equation in momentum representation with the purpose of developing an operator formalism, generalizing the method of solution of this equation for as many molecules as possible. As she works her way to the solution of this partial differential equation (PDE), she studies the subjects of functional, real and complex analysis as well as the theory of PDEs, all of which are likely to dictate the focus of her forthcoming Ph.D. Gabriella also enjoys writing fiction, and reading and discussing philosophy.
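For orientation, the equation mentioned here has a standard momentum-representation form; the display below is the generic one-dimensional textbook statement (our addition for context, not Gabriella's own operator formalism), with the potential entering through its Fourier transform:

% Time-independent Schrödinger equation in the momentum representation (1D);
% \varphi(p) is the momentum-space wave function and \tilde{V} is the Fourier
% transform of the potential V(x).
\[
  \frac{p^{2}}{2m}\,\varphi(p)
  + \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} \tilde{V}(p-p')\,\varphi(p')\,dp'
  = E\,\varphi(p)
\]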
Quantum tunnelling
From Wikipedia, the free encyclopedia

Quantum tunnelling refers to the phenomenon of a particle's ability to penetrate energy barriers within electronic structures. The scientific terms for this are wave-mechanical tunnelling, quantum-mechanical tunnelling and the tunnel effect. The tunnel effect is an evanescent wave coupling effect that occurs in the context of quantum mechanics. Particles behave in a manner calculated with Schrödinger's wave-equations. All waves die away, but according to the laws of physics, the energy in these waves passes on.

Wave coupling effects, mathematically equivalent to quantum tunnelling mechanics, can occur with Maxwell's wave-equation (both with light and with microwaves), and with the common non-dispersive wave-equation often applied (for example) to waves on strings and to acoustics. For these effects to occur there must be a situation where a thin region of "medium type 2" is sandwiched between two regions of "medium type 1", and the properties of these media have to be such that the wave equation has "travelling-wave" solutions in medium type 1, but "real exponential solutions" (rising and falling) in medium type 2. In optics, medium type 1 might be glass, medium type 2 might be a vacuum. In quantum mechanics, in connection with the motion of a particle, medium type 1 is a region of space where the particle's total energy is greater than its potential energy, and medium type 2 is a region of space (known as the "barrier") where the particle's total energy is less than its potential energy - for further explanation see the section on "Schrödinger equation - tunnelling basics" below.

If conditions are right, amplitude from a travelling wave, incident on medium type 2 from medium type 1, can "leak through" medium type 2 and emerge as a travelling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the travelling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. Depending on the wave equation being used, the leaked amplitude is interpreted physically as travelling energy or as a travelling particle, and, numerically, the ratio of the square of the leaked amplitude to the square of the incident amplitude gives the proportion of incident energy transmitted out the far side, or (in the case of the Schrödinger equation) the probability that the particle "tunnels" through the barrier.

[Figure: Schematic representation of quantum tunnelling through a barrier. The energy of the tunnelled particle is the same; only the quantum amplitude (and hence the probability of the process) is decreased.]

The scale on which these "tunnelling-like phenomena" occur depends on the wavelength of the travelling wave. For electrons, the thickness of "medium type 2" (called in this context "the tunnelling barrier") is typically a few nanometres; for alpha-particles tunnelling out of a nucleus, the thickness is much less; for the analogous phenomenon involving light, the thickness is much greater. With Schrödinger's wave-equation, the characteristic that defines the two media discussed above is the kinetic energy of the particle if it is considered as an object that could be located at a point.
In medium type 1 the kinetic energy would be positive; in medium type 2 the kinetic energy would be negative. There is some inconsistency in this, because particles cannot physically be located at a point: they are always spread out ("delocalised") to some extent, and the kinetic energy of the delocalised object is always positive.

What is true is that it is sometimes mathematically convenient to treat particles as behaving like points, particularly in the context of Newton's Second Law and classical mechanics generally. In the past, people thought that the success of classical mechanics meant that particles could always and in all circumstances be treated as if they were located at points. But there never was any convincing experimental evidence that this was true when very small objects and very small distances are involved, and we now know that this viewpoint was mistaken. However, because it is still traditional to teach students early in their careers that particles behave like points, it sometimes comes as a big surprise for people to discover that it is well established that travelling physical particles always physically obey a wave-equation (even when it is convenient to use the mathematics of moving points). Clearly, a hypothetical classical point particle analysed according to Newton's Laws could not enter a region where its kinetic energy would be negative. But a real delocalised object, which obeys a wave-equation and always has positive kinetic energy, can leak through such a region if conditions are right. An approach to tunnelling that avoids mention of the concept of "negative kinetic energy" is set out below in the section on "Schrödinger equation - tunnelling basics".

An electron approaching a barrier has to be represented as a wave-train. This wave-train can sometimes be quite long - electrons in some materials can be 10 to 20 nm long. This makes animations difficult. If it were legitimate to represent the electron by a short wave-train, then tunnelling could be represented as in the animation alongside.

[Animation: Reflection and tunnelling of an electron wavepacket directed at a potential barrier. The bright spot moving to the left is the reflected part of the wavepacket. A very dim spot can be seen moving to the right of the barrier; this is the small fraction of the wavepacket that tunnels through the classically forbidden barrier. Also notice the interference fringes between the incoming and reflected waves.]

It is sometimes said that tunnelling occurs only in quantum mechanics. Unfortunately, this statement is a bit of a linguistic conjuring trick. As indicated above, "tunnelling-type" evanescent-wave phenomena occur in other contexts too. But, until recently, it has only been in quantum mechanics that evanescent wave coupling has been called "tunnelling". (However, there is an increasing tendency to use the label "tunnelling" in other contexts too, and the names "photon tunnelling" and "acoustic tunnelling" are now used in the research literature.)

With regard to the mathematics of tunnelling, a special problem arises. For simple tunnelling-barrier models, such as the rectangular barrier, the Schrödinger equation can be solved exactly to give the value of the tunnelling probability (sometimes called the "transmission coefficient"). Calculations of this kind make the general physical nature of tunnelling clear. One would also like to be able to calculate exact tunnelling probabilities for barrier models that are physically more realistic.
However, when appropriate mathematical descriptions of barriers are put into the Schrödinger equation, the result is an awkward differential equation. Usually, the equation is of a type where it is known to be mathematically impossible in principle to solve the equation exactly in terms of the usual functions of mathematical physics, or in any other simple way. Mathematicians and mathematical physicists have been working on this problem since at least 1813, and have been able to develop special methods for solving equations of this kind approximately. In physics these are known as "semiclassical" or "quasiclassical" methods. A common semiclassical method is the so-called WKB approximation (also known as the "JWKB approximation"). The first known attempt to use such methods to solve a tunnelling problem in physics was made in 1928, in the context of field electron emission. It is sometimes considered that the first people to get the mathematics of applying this kind of approximation to tunnelling fully correct (and to give reasonable mathematical proof that they had done so) were N. Fröman and P.O. Fröman, in 1965. Their complex ideas have not yet made it into theoretical-physics textbooks, which tend to give simpler (but slightly more approximate) versions of the theory. An outline of one particular semiclassical method is given below.

Three notes may be helpful. In general, students taking physics courses in quantum mechanics are presented with problems (such as the quantum mechanics of the hydrogen atom) for which exact mathematical solutions to the Schrödinger equation exist. Tunnelling through a realistic barrier is a reasonably basic physical phenomenon, so it is sometimes the first problem that students encounter where it is mathematically impossible in principle to solve the Schrödinger equation exactly in any simple way. Thus, it may also be the first occasion on which they encounter the "semiclassical-method" mathematics needed to solve the Schrödinger equation approximately for such problems. Not surprisingly, this mathematics is likely to be unfamiliar, and may feel "odd". Unfortunately, it also comes in several different variants, which doesn't help.

Also, some accounts of tunnelling seem to be written from a philosophical viewpoint that a particle is "really" point-like, and just has wave-like behaviour. There is very little experimental evidence to support this viewpoint. A preferable philosophical viewpoint is that the particle is "really" delocalised and wave-like, and always exhibits wave-like behaviour, but that in some circumstances it is convenient to use the mathematics of moving points to describe its motion. This second viewpoint is used in this section. The precise nature of this wave-like behaviour is, however, a much deeper matter, beyond the scope of this article on tunnelling.

Although the phenomenon under discussion here is usually called "quantum tunnelling" or "quantum-mechanical tunnelling", it is the wave-like aspects of particle behaviour that are important in tunnelling theory, rather than effects relating to the quantization of the particle's energy states. For this reason, some writers prefer to call the phenomenon "wave-mechanical tunnelling".

By 1928, George Gamow had solved the theory of the alpha decay of a nucleus via tunnelling. Classically, the particle is confined to the nucleus because of the high energy requirement to escape the very strong potential.
Under this system, it takes an enormous amount of energy to pull apart the nucleus. In quantum mechanics, however, there is a probability the particle can tunnel through the potential and escape. Gamow solved a model potential for the nucleus and derived a relationship between the half-life of the particle and the energy of the emission. Alpha decay via tunnelling was also solved concurrently by Ronald Gurney and Edward Condon. Shortly thereafter, both groups considered whether particles could also tunnel into the nucleus.

After attending a seminar by Gamow, Max Born recognized the generality of quantum-mechanical tunnelling. He realized that the tunnelling phenomenon was not restricted to nuclear physics, but was a general result of quantum mechanics that applies to many different systems. Today the theory of tunnelling is even applied to the early cosmology of the universe.[1]

Quantum tunnelling was later applied to other situations, such as the cold emission of electrons, and perhaps most importantly semiconductor and superconductor physics. Phenomena such as field emission, important to flash memory, are explained by quantum tunnelling. Tunnelling is a source of major current leakage in very-large-scale integration (VLSI) electronics, and results in the substantial power drain and heating effects that plague high-speed and mobile technology. Another major application is in electron-tunnelling microscopes (see scanning tunnelling microscope), which can resolve objects that are too small to see using conventional microscopes. Electron tunnelling microscopes overcome the limiting effects of conventional microscopes (optical aberrations, wavelength limitations) by scanning the surface of an object with tunnelling electrons.

Quantum tunnelling has been shown to be a mechanism used by enzymes to enhance reaction rates. It has been demonstrated that enzymes use tunnelling to transfer both electrons and nuclei such as hydrogen and deuterium. It has even been shown, in the enzyme glucose oxidase, that oxygen nuclei can tunnel under physiological conditions.[2]

Schrödinger equation - tunnelling basics

Consider the time-independent Schrödinger equation for one particle, in one dimension. This can be written in the forms

-\frac{\hbar^2}{2m}\,\frac{d^2\Psi(x)}{dx^2} + V(x)\,\Psi(x) = E\,\Psi(x)

and, equivalently,

\frac{d^2\Psi(x)}{dx^2} = \frac{2m}{\hbar^2}\left[V(x)-E\right]\Psi(x) = \frac{2m}{\hbar^2}\,M(x)\,\Psi(x),

where \hbar is Planck's constant divided by 2\pi, m is the particle mass, x represents distance measured in the direction of motion of the particle, \Psi(x) is the Schrödinger wave function, V(x) is the potential energy of the particle (measured relative to any convenient reference level), E is that part of the total energy of the particle that is associated with motion in the x-direction (measured relative to the same reference level as V(x)), and M(x) is a quantity defined by this equation. Explicitly, M(x) is given by M(x) = V(x) - E.

The solutions of the Schrödinger equation take different forms for different values of x, depending on whether M(x) is positive or negative. This is easiest to understand if we consider a situation in which we have regions of space in which M(x) is (a) constant and negative and (b) constant and positive. When M(x) is constant and negative, then the Schrödinger equation can be written in the form

\frac{d^2\Psi(x)}{dx^2} = -k^2\,\Psi(x), \qquad k^2 = -\frac{2m}{\hbar^2}M(x).

The solutions of this equation represent travelling waves, with phase-constant +k or -k.
Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written in the form

\frac{d^2\Psi(x)}{dx^2} = \kappa^2\,\Psi(x), \qquad \kappa^2 = \frac{2m}{\hbar^2}M(x).

The solutions of this equation are rising and falling exponentials, which take the form exp(+\kappa x) for rising exponentials, or the form exp(-\kappa x) for decaying exponentials (also called "evanescent waves"). When M(x) varies with position, the same difference in behaviour occurs, depending on whether M(x) is negative or positive, but the parameters k and \kappa become functions of position. It follows that the sign of M(x) determines the "nature of the medium", with negative M corresponding to the "medium of type 1" discussed above, and positive M corresponding to the "medium of type 2". It thus follows from well-established mathematical principles of classical wave-physics - but applied to the Schrödinger equation - that evanescent wave coupling can occur if a region of positive M is sandwiched between two regions of negative M. This occurs if V(x) has a "hill-type" shape.

A problem is that the mathematics of dealing with the situation where M(x) varies with x is intensely difficult, except in certain mathematical special cases that usually do not correspond quantitatively well to physical reality. A discussion of the simple (but quantitatively unrealistic) case of the rectangular potential barrier appears elsewhere. A discussion of the "semi-classical" approximate method, as sometimes found in physics textbooks, is given in the next section. A full (but very complicated) mathematical treatment appears in the 1965 monograph by Fröman and Fröman noted below. Their ideas have not yet made it into physics textbooks, but probably in most cases their corrections have little quantitative effect. A brief statement of the outcome of the Fröman and Fröman treatment appears in the article on field electron emission (which was the first major physical effect to be identified as due to electron tunnelling, in 1928), in the section on escape probability.

Note that, in the hypothetical physical picture of "particle" motion used in the 1800s and earlier, in which a "particle" is assumed to have the behaviour of a moving point mass, positive values of M(x) correspond to negative values of the kinetic energy of a point mass located at position x. There is, however, no logical need to introduce the concept of "negative kinetic energy at a point in space" into discussions of evanescent wave coupling (i.e., there is no logical need to introduce this concept into discussions of "tunnelling" based on the Schrödinger equation).

A semiclassical method for determining a formula for tunnelling probability

Now let us recast the wave function \Psi(x) as the exponential of a function:

\Psi(x) = e^{\Phi(x)}.

Substituting this into the Schrödinger equation gives

\Phi''(x) + \Phi'(x)^2 = \frac{2m}{\hbar^2}\left(V(x)-E\right).

Now we separate \Phi'(x) into real and imaginary parts using real-valued functions A and B:

\Phi'(x) = A(x) + i\,B(x).

The real part of the equation then reads

A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2}\left(V(x)-E\right),

and the pure imaginary part needs to vanish, because the right-hand side is real-valued:

B'(x) + 2\,A(x)\,B(x) = 0.

Next we want to take the semiclassical approximation to solve this. That means we expand each function as a power series in \hbar:

A(x) = \frac{1}{\hbar}\sum_{j=0}^{\infty} \hbar^j A_j(x), \qquad B(x) = \frac{1}{\hbar}\sum_{j=0}^{\infty} \hbar^j B_j(x).

From the equations we can see that the power series must start with at least an order of \hbar^{-1} to satisfy the real part of the equation. But as we want a good classical limit, we also want to start with as high a power of Planck's constant as possible. The constraints on the lowest order terms are as follows.
A_0(x)^2 - B_0(x)^2 = 2m\left(V(x)-E\right) \quad \text{and} \quad A_0(x)\,B_0(x) = 0.

If the amplitude varies slowly as compared to the phase, we set A_0(x) = 0 and get

B_0(x) = \pm\sqrt{2m\left(E-V(x)\right)},

which is only valid when you have more energy than potential: classical motion. After the same procedure on the next order of the expansion we get

\Psi(x) \approx C\,\frac{e^{\pm\frac{i}{\hbar}\int \sqrt{2m\left(E-V(x)\right)}\,dx}}{\left(2m\left(E-V(x)\right)\right)^{1/4}}.

On the other hand, if the phase varies slowly as compared to the amplitude, we set B_0(x) = 0 and get

A_0(x) = \pm\sqrt{2m\left(V(x)-E\right)},

which is only valid when you have more potential than energy: tunnelling motion. Resolving the next order of the expansion yields

\Psi(x) \approx C\,\frac{e^{\pm\frac{1}{\hbar}\int \sqrt{2m\left(V(x)-E\right)}\,dx}}{\left(2m\left(V(x)-E\right)\right)^{1/4}}.

It is apparent from the denominator that both these approximate solutions are bad near the classical turning point E = V(x). What we have are the approximate solutions away from the potential hill and beneath the potential hill. Away from the potential hill, the particle acts similarly to a free wave: the phase is oscillating. Beneath the potential hill, the particle undergoes exponential changes in amplitude. In a specific tunnelling problem, we might suspect that the transition amplitude is proportional to

e^{-\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}},

and thus the tunnelling is exponentially dampened by large deviations from classically allowable motion.

But to be complete we must find the approximate solutions everywhere and match coefficients to make a global approximate solution. We have yet to approximate the solution near the classical turning points E = V(x). Let us label a classical turning point x_1. Now, because we are near E = V(x_1), we can expand \frac{2m}{\hbar^2}\left(V(x)-E\right) in a power series. Let us only approximate to linear order:

\frac{2m}{\hbar^2}\left(V(x)-E\right) = v_1\,(x-x_1),

so that near the turning point the Schrödinger equation becomes

\frac{d^2\Psi(x)}{dx^2} = v_1\,(x-x_1)\,\Psi(x).

This differential equation looks deceptively simple. Its solutions are Airy functions. Hopefully this solution should connect the far-away and beneath-the-hill solutions. Given the 2 coefficients on one side of the classical turning point, we should be able to determine the 2 coefficients on the other side of the classical turning point by using this local solution to connect them. We are thus able to find a relationship between the coefficients C and C_+, C_-. Fortunately, the Airy function solutions asymptote into sine, cosine and exponential functions in the proper limits, and the relationship between the coefficients can be read off from those asymptotic forms.

Now we can construct global solutions and solve tunnelling problems. The transmission coefficient,

\left|\frac{C_{\text{outgoing}}}{C_{\text{incoming}}}\right|^2,

for a particle tunnelling through a single potential barrier is found to be

T = e^{-2\int_{x_1}^{x_2} dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}},

where x_1, x_2 are the two classical turning points for the potential barrier. If we take the classical limit of all other physical parameters much larger than Planck's constant, abbreviated as \hbar \rightarrow 0, we see that the transmission coefficient correctly goes to zero. This classical limit would have failed in the unphysical, but much simpler to solve, situation of a square potential.

A related subject is above-barrier reflection: in classical physics a particle will not reflect if its energy is above the potential barrier, but in the quantum case it is possible. In this case, the reflection coefficient is exponentially small in Planck's constant. The semiclassical technique for calculating the reflection coefficient is similar to the calculation of the tunnelling probability described above.
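To see the final formula in action, here is a minimal numerical sketch (our own illustration, not part of the article): it evaluates the WKB transmission coefficient T above for a Gaussian barrier, with hbar = m = 1 and arbitrary illustrative barrier parameters.

import numpy as np

# Minimal WKB sketch (illustrative): T = exp(-2 * integral sqrt(2m(V-E))/hbar dx),
# with the integral taken between the classical turning points. Units: hbar = m = 1.
HBAR = 1.0
MASS = 1.0

def wkb_transmission(V, E, x):
    """Approximate tunnelling probability through the barrier V(x) at energy E."""
    under_barrier = V(x) - E                          # > 0 only inside the barrier
    kappa = np.sqrt(2.0 * MASS * np.clip(under_barrier, 0.0, None)) / HBAR
    # The integrand vanishes outside the turning points, so integrating over
    # the whole grid is equivalent to integrating from x1 to x2.
    return np.exp(-2.0 * np.trapz(kappa, x))

barrier = lambda x: 5.0 * np.exp(-x**2)               # arbitrary smooth barrier
x = np.linspace(-5.0, 5.0, 20001)

for E in (1.0, 2.0, 4.0):
    print(f"E = {E:.1f}  ->  T = {wkb_transmission(barrier, E, x):.3e}")
# T rises steeply as E approaches the barrier top, though the WKB formula
# itself loses accuracy there, where the two turning points merge.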
According to satellite-based measurements, rogue waves do not only exist, they are relatively frequent. If you have questions about how to cite anything on our website in your project or classroom presentation, please contact your teacher. At all." The design of the hatches only allowed for a static pressure of less than 2 metres (6.6 ft) of water or 17.1 kilopascals (1.59 kN/sq ft),[nb 3] meaning that the typhoon load on the hatches was more than ten times the design load. Rogue waves appear to be ubiquitous in nature and are not limited to the oceans. Even with sophisticated weather systems aboard, ocean racers' most unpredictable obstacles may be wind and waves. Finally, they observed that optical instruments such as the laser used for the Draupner wave might be somewhat confused by the spray at the top of the wave, if it broke, and this could lead to uncertainties of around 1–1.5 metres in the wave height. The use of a Gaussian form to model waves had been the sole basis of virtually every text on that topic for the past 100 years.[22][23][when? Consequently, the Maritime Court investigation concluded that the severe weather had somehow created an 'unusual event' that had led to the sinking of the München. During its route it has killed more than 15 people. [39], In addition fast moving waves are now known to also exert extremely high dynamic pressure. This was a scientific research vessel and was fitted with high quality instruments. I know it's just a nickname but Scoot? Read about our approach to external linking. ], The first known scientific article on "Freak waves" was written by Professor Laurence Draper in 1964. Suggested mechanisms for freak waves include the following: The spatio-temporal focusing seen in the NLS equation can also occur when the nonlinearity is removed. [114] unusually large wave not associated with a storm system or tsunami. (The only ships lost in the 2004 Asian tsunami were in port.) Oct. 28, 2020. Researchers have shown that the non-linear Schrödinger equation can explain how statistical models of ocean waves can suddenly grow to extreme heights, through this focusing of energy. There are a number of research programmes currently underway focussed on rogue waves including: Because the phenomenon of rogue waves is still a matter of active research, it is premature to state clearly what the most common causes are or whether they vary from place to place. If you account for the space-time effect properly, then the probability of encountering a rogue wave is larger. As the title says, it talks a lot about rogue waves. Students learn about the parts of a wave, wave height, and wavelength and then draw and label a wave. Scince it was so descriptive, we could draw a lot of conclusions form the text. This adventurous, exiting story called "The Rouge Wave" has a theme which was amazing, it impressed me. Well, what can I say? They then compared the strengths of these correlations with those they obtained when they randomly shuffled the segments. In 1826, French scientist and naval officer Captain Jules Dumont d'Urville reported waves as high as 108 ft (33 m) in the Indian Ocean with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago. Sustainability Policy |  “We were in a storm with 30-foot swells when a rogue wave over 50 feet high hit us, blowing out the windows of the bridge, blowing out the portholes in the galley, destroying the mast and splash rail, and flooding the engineer room with water. 
A rogue wave is usually defined as a wave that is two times the significant wave height of the area, a wave of a very different nature and characteristics from the surrounding waves in that sea state, and with very low probability of occurrence according to the Gaussian-process description valid for linear wave theory. Acknowledgement of the existence of rogue waves, despite the fact that they cannot plausibly be explained by reference to simple statistical models, is thus a very modern scientific paradigm. "Satellite measurements have shown there are many more rogue waves in the oceans than linear theory predicts," says Amin Chabchoub of Aalto University in Finland. "We are now able to generate realistic rogue waves in the laboratory environment, in conditions which are similar to those in the oceans."

Notable incidents abound. One of the most famous shipwrecks of the 20th century, the Edmund Fitzgerald, was probably caused by at least one rogue wave on Lake Superior, part of the Great Lakes of North America; both the 222-metre (729-foot) ship and its crew of 29 were lost. (Rogue waves can form in large bodies of freshwater as well as in the ocean.) The best-documented measurement comes from the Draupner platform: at 3 p.m. on 1 January 1995 it recorded a 26 m (85 ft) rogue wave, i.e., 6 m (21 ft) taller than the predicted 10,000-year wave, that hit the rig at 72 km/h (45 mph).

Laboratory and theoretical work has clarified some of the physics. Researchers have created rogue-wave holes on the water surface in a water wave tank. Experiments on crossing seas found that if waves met at an angle less than about 60 degrees, the top of the wave "broke" sideways and downwards (a "plunging breaker"); but from about 60 degrees and greater, the wave began to break vertically upwards, creating a peak that did not reduce the wave height as usual, but instead increased it (a "vertical jet"). In a 2016 study, Fedele and his colleagues argued that more straightforward linear explanations can account for rogue waves after all: "Non-linearities have a role, but it's a minor one," he says. For anyone sitting on an isolated oil rig or ship, watching the swell of the waves under a stormy sky, even a few minutes of warning could prove crucial.

Regulation has responded as well. There are more than 50 classification societies worldwide, each with different rules, although most new ships are built to the standards of the 12 members of the International Association of Classification Societies, which implemented two sets of Common Structural Rules, one for oil tankers and one for bulk carriers, in 2006.[39] The Norwegian offshore standards now take extreme severe wave conditions into account and require that a 10,000-year wave does not endanger the ship's integrity.
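The "twice the significant wave height" criterion is easy to operationalize. Below is a minimal Python sketch (the synthetic wave record is an illustrative stand-in for buoy data, not a real measurement) that computes the significant wave height H_s as the mean of the highest third of wave heights and flags rogue candidates:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic wave-height record in metres -- an assumed stand-in for buoy data
    heights = rng.rayleigh(scale=1.5, size=1000)

    # Significant wave height: mean of the highest one-third of wave heights
    h_sorted = np.sort(heights)[::-1]
    Hs = h_sorted[: len(h_sorted) // 3].mean()

    rogues = heights[heights > 2 * Hs]
    print(f"Hs = {Hs:.2f} m; {rogues.size} wave(s) exceed the rogue criterion 2*Hs")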
Historical skepticism ran deep. In 1826, the French scientist and naval officer Captain Jules Dumont d'Urville reported waves as high as 33 m (108 ft) in the Indian Ocean, with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago.[19][20] Author Susan Casey wrote that much of that disbelief came because there were very few people who had seen a rogue wave, and until the advent of steel double-hulled ships of the 20th century "people who encountered 100-foot rogue waves generally weren't coming back to tell people about it."[21]

The field has since matured; a workshop of leading researchers in the world, Rogue Waves 2000, was held in Brest in November 2000.[33] One proposed mechanism is modulational instability: the basic idea is that, when waves become unstable, they can grow quickly by "stealing" energy from each other. This is an inherently non-linear effect, a non-linear equation being one in which a change in output is not proportional to the change in input. Another is current focusing: a curved current can narrowly focus the wave's energy, like an optical lens can powerfully focus light into a single beam. However, further analysis of rogue waves using a fully nonlinear model by R. H. Gibbs (2005) brings the instability mode into question, as it is shown that a typical wavegroup focuses in such a way as to produce a significant wall of water, at the cost of a reduced height. Nor does every rogue wave require a storm: one stand-out wave was detected with a wave height of 11 metres (36 ft) in a relatively low sea state. Some researchers believe that, with the right statistics, we could foresee rogue waves a little further ahead.
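The "energy stealing" of modulational instability can be seen in a minimal split-step Fourier simulation of the focusing NLS equation, i·A_t + 0.5·A_xx + |A|²·A = 0. This Python sketch is illustrative only: the normalization, grid, time step, and the seeded perturbation wavenumber (chosen inside the instability band of a unit-amplitude background) are assumptions, not taken from any of the cited studies:

    import numpy as np

    N, L = 1024, 80.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
    dt, steps = 0.002, 4000

    K = 2 * np.pi * 18 / L                 # perturbation wavenumber ~ sqrt(2)
    A = (1.0 + 0.01 * np.cos(K * x)).astype(complex)  # weakly modulated plane wave

    for _ in range(steps):
        # Linear (dispersion) step in Fourier space, then nonlinear step in x-space
        A = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(A))
        A *= np.exp(1j * np.abs(A)**2 * dt)

    print(f"background ~1.0, final max|A| = {np.abs(A).max():.2f}")

Starting from a 1% modulation, the field develops peaks well above the background, which is the qualitative signature of rogue-wave formation in this model.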
Effective Nonlinear Coefficient

Definition: a coefficient for quantifying the strength of a nonlinear interaction

German: effektiver nichtlinearer Koeffizient

Category: nonlinear optics

There are various kinds of optical nonlinearities, the strength of which can depend both on material properties and on various operation conditions. Often, one uses a kind of effective nonlinear coefficient to quantify such a strength. The following sections give some typical examples.

Kerr Nonlinearity in an Optical Fiber

One of the effects of the Kerr nonlinearity in an optical fiber is self-phase modulation, i.e., phase changes which are proportional to the optical intensity. In the case of a single-mode fiber, the transverse intensity profile can usually be assumed to be determined by the refractive index profile of the fiber, with negligible influence of nonlinearities. The guided fiber mode has a certain effective mode area A_eff, which is defined such that the nonlinear phase change for light within some length L of a fiber is

φ_nl = (2π / λ) · n₂ · (P / A_eff) · L,

containing the nonlinear index n₂. One can introduce an effective nonlinear coefficient

γ = 2π · n₂ / (λ · A_eff)

such that the phase shift can be written as:

φ_nl = γ · P · L

Such a nonlinear coefficient (which can also be called an SPM coefficient) occurs e.g. in the nonlinear Schrödinger equation for the evolution of ultrashort pulses in a fiber. Its units are rad / (W m) (radians per watt and meter). If an ultrashort pulse propagates through the fiber, the total nonlinear phase shift can be simply calculated as above, as long as the pulse duration is not substantially changed e.g. by the chromatic dispersion of the fiber. Otherwise, something like the nonlinear Schrödinger equation needs to be solved for that purpose.

Crystals with χ(2) Nonlinearity

Various nonlinear crystal materials are used for nonlinear frequency conversion processes such as frequency doubling, parametric amplification or optical rectification. For calculating quantities like the output power or the power conversion efficiency, one frequently uses equations containing an effective nonlinear coefficient d_eff with units of pm/V. The magnitude of that coefficient depends both on material properties and on the polarization properties of the interacting light beams. In simpler cases with noncritical phase matching, all involved light beams essentially propagate along one axis of the crystal, and the involved beams are typically linearly polarized either along another crystal axis or at an angle of 45° against such an axis. The effective nonlinearity can then relatively simply be calculated from the nonlinear tensor of the material. For example, for noncritically phase-matched frequency doubling in lithium niobate (LiNbO₃), one usually has light propagation in the X direction, with the pump light being polarized in the Y direction and the harmonic light polarized in the Z direction. The nonlinear polarization is

P_i(2ω) = ε₀ · Σ_jk d_ijk · E_j(ω) · E_k(ω)

with various tensor coefficients d_jk, but most of these tensor coefficients are irrelevant since the pump field has only got a component E_Y, while the harmonic field has only a component E_Z, which can interact only with P_Z. So we simply obtain

P_Z(2ω) = ε₀ · d₃₁ · E_Y(ω)²

and find that the effective nonlinear coefficient is d_eff = d₃₁ in this example case.
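For a feel of typical magnitudes, here is a minimal Python sketch computing γ and the nonlinear phase shift for a standard silica telecom fiber. The parameter values (n₂, A_eff, power and length) are commonly quoted figures used here as illustrative assumptions:

    import math

    lam   = 1550e-9    # wavelength, m
    n2    = 2.6e-20    # nonlinear index of silica, m^2/W (commonly quoted value)
    A_eff = 80e-12     # effective mode area, m^2 (80 um^2)

    gamma = 2 * math.pi * n2 / (lam * A_eff)   # SPM coefficient, rad/(W m)
    P, L = 0.1, 10e3                           # 100 mW over 10 km of fiber
    phi_nl = gamma * P * L

    print(f"gamma  = {gamma*1e3:.2f} rad/(W km)")
    print(f"phi_nl = {phi_nl:.2f} rad")

This reproduces the often-cited order of magnitude of roughly 1 rad/(W km) for standard single-mode fiber.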
The same result would be obtained for light propagation in the Y direction, where the pump light needs to be polarized in the X direction.

In cases with critical phase matching, where the beam direction is not simply along one of the crystal axes, the situation is more complicated. For a given phase-matching configuration, characterized by a phase-matching angle θ or φ, one often requires a formula involving one or more tensor coefficients and the angle. (The relative signs of tensor components may then be relevant.) The required formula depends on the crystal symmetry, which determines the structure of the nonlinear tensor.

[Figure 1: Phase-matching angle (red, left axis) and effective nonlinearity (blue, right axis) for critical phase matching of frequency doubling in LBO at room temperature, configuration oo-e in the XY plane.]

Figure 1 shows an example for critical phase matching of frequency doubling in lithium triborate (LBO). For different pump wavelengths, phase matching requires different phase-matching angles, and these in turn influence the magnitude of the effective nonlinear coefficient. For pump wavelengths approaching ≈550 nm, the nonlinearity vanishes, so that the interaction is not usable, although it could still be phase-matched.

In cases with quasi-phase matching, there is another factor 2/π in the equation for the effective nonlinear coefficient. Its origin is that one does not have perfect phase matching in the material, i.e., there are periodic phase deviations to one side and the other during propagation, which effectively make the nonlinear interaction somewhat weaker. Nevertheless, the achieved effective nonlinear coefficient is often substantially higher with quasi-phase matching than with birefringent phase matching, because one can utilize a higher tensor component (e.g. d₃₃ instead of d₃₁ in LiNbO₃).

Formulas for calculating or estimating optical powers of generated waves, for example, are not to be discussed here in detail. They often involve the square of the effective nonlinear coefficient, in addition to refractive indices and frequencies of the involved waves, beam radii etc. They are often based on certain assumptions, for example that the transverse intensity profiles remain essentially unchanged during propagation and that the conversion efficiency remains small. Results with more general validity (not requiring certain assumptions) can be obtained with numerical simulations.

Since the strength of such nonlinear interactions usually depends not only on the effective nonlinear coefficient, but also on the refractive indices for the involved waves, for comparison of different nonlinear crystal materials one often uses a figure of merit like d_eff² / (n₁·n₂·n₃). As refractive indices can vary substantially between different crystal materials, they can have profound effects in such comparisons, which should not be ignored.
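The practical payoff of the 2/π factor versus the higher tensor component can be made concrete with a small sketch. The d₃₃ and d₃₁ values for LiNbO₃ below are commonly quoted figures that vary with wavelength and reference, so treat them as illustrative assumptions:

    import math

    d33 = 25.0   # pm/V, accessible with quasi-phase matching (e.g. PPLN)
    d31 = 4.5    # pm/V, used in the birefringent configuration above

    d_eff_qpm = (2 / math.pi) * d33   # extra 2/pi factor for quasi-phase matching
    d_eff_bpm = d31

    print(f"QPM:  d_eff ~ {d_eff_qpm:.1f} pm/V")
    print(f"BPM:  d_eff ~ {d_eff_bpm:.1f} pm/V")
    # Conversion efficiency scales with d_eff squared:
    print(f"efficiency ratio ~ {(d_eff_qpm / d_eff_bpm)**2:.1f}x")

Despite the 2/π penalty, quasi-phase matching wins by an order of magnitude in conversion efficiency here, which is why periodically poled materials are so widely used.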
Remark: All the answers so far have been very insightful and on point, but after receiving public and private feedback from other mathematicians on MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.

I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the possibility that human brains might perform reverse-mode automatic differentiation, or what some call backpropagation [7]. Having said this, a large number of computational neuroscientists (even those with math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.

Problem definition: Might there be an alternative formulation for mathematical physics which doesn't employ the use of partial derivatives? I think this may be a problem in reverse mathematics [6]. But, in order to define equivalence, a couple of definitions are required:

Partial derivative as a linear map: If the derivative of a differentiable function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ at $x_o \in \mathbb{R}^n$ is given by the Jacobian $\frac{\partial f}{\partial x} \Big\lvert_{x=x_o} \in \mathbb{R}^{m \times n}$, the partial derivative with respect to $i \in [n]$ is the $i$th column of $\frac{\partial f}{\partial x} \Big\lvert_{x=x_o}$ and may be computed using the $i$th standard basis vector $e_i$:

\begin{equation} \frac{\partial{f}}{\partial{x_i}} \Big\lvert_{x=x_o} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) \Big\lvert_{x=x_o} \tag{1} \end{equation}

This is the general setting of numerical differentiation [3].

Partial derivative as an operator: Within the setting of automatic differentiation [4], computer scientists construct algorithms $\nabla$ for computing the dual program $\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$, which corresponds to an operator definition for the partial derivative with respect to the $i$th coordinate:

\begin{equation} \nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2} \end{equation}

\begin{equation} \nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i} \tag{3} \end{equation}

Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn't contain a method for numerical or automatic differentiation.

The special case of classical mechanics: For concreteness, we may consider classical mechanics, as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment, I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist? Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian processes, which are provably universal function approximators [5]?
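To make the operator view of (2)–(3) concrete, here is a minimal forward-mode automatic-differentiation sketch using dual numbers. This is Python; the class name, the test function, and the evaluation point are illustrative choices, not part of any particular library:

    import math

    class Dual:
        """Dual number a + b*eps with eps**2 = 0; b carries the derivative."""
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a + o.a, self.b + o.b)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.a), math.cos(self.a) * self.b)

    def f(x, y):
        return x * y + x.sin()   # f(x, y) = x*y + sin(x)

    # Partial derivative w.r.t. x at (1, 2): seed x with b = 1, playing the
    # role of the standard basis vector e_i in equation (1)
    out = f(Dual(1.0, 1.0), Dual(2.0, 0.0))
    print(out.a, out.b)   # value f(1,2), and df/dx = y + cos(x) = 2 + cos(1)

Seeding y instead of x returns the other partial derivative; running the program once per basis vector assembles the full Jacobian column by column.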
Koopman–von Neumann classical mechanics as a candidate solution: After reflecting upon the answers of Ben Crowell and gmvh, it appears that we require a formulation of classical mechanics where: 1. Everything is formulated in terms of linear operators. 2. All problems can then be recast in an algebraic language. After doing a literature search, it appears that Koopman–von Neumann classical mechanics might be a suitable candidate, as we have an operator theory in Hilbert space similar to quantum mechanics [8,9,10]. That said, I only recently came across this formulation, so there may be important subtleties I am overlooking.

Related problems: Furthermore, I think it may be worth considering the following related questions:

1. What would be left of mathematical physics if we could not compute partial derivatives?
2. Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?
3. Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?

A historical note: It is worth noting that, more than 1000 years ago, as a result of his profound studies on optics, the mathematician and physicist Ibn al-Haytham (aka Alhazen) reached the insight that perception is a construction of the mind. Today it is known that even color is a construction of the mind, as photons are the only physical objects that reach the retina. However, broadly speaking, neuroscience is just beginning to catch up with Alhazen's understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that, to a first-order approximation, the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion. Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling. This isolates a rather uniform brain region whose processes at the circuit level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11,12]. As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing's motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research showing that a single dendritic compartment may compute the xor function: paper, discussion.

1. William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.
2. L.D. Landau & E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press, 1969.
3. Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4: 202–210. doi:10.1137/0704019.
4. Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-Tools. SIAM. ISBN 978-1-611972-06-1.
5. Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group, Department of Engineering Science, University of Oxford. 2007.
6. Connie Fan. Reverse Mathematics. University of Chicago. 2010.
7. Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). https://doi.org/10.1038/s41593-019-0520-2
8. Wikipedia contributors. "Koopman–von Neumann classical mechanics." Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.
9. Koopman, B. O. (1931).
"Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. doi:10.1073/pnas.17.5.315. PMC 1076052. PMID 16577368. 10. Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a Step Beyond. 2015. 11. Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019. 12. Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo,Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019. 13. Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. doi:10.1112/plms/s2-42.1.230. (and Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical 14. ALBERT GIDON, TIMOTHY ADAM ZOLNIK, PAWEL FIDZINSKI, FELIX BOLDUAN, ATHANASIA PAPOUTSI, PANAYIOTA POIRAZI, MARTIN HOLTKAMP, IMRE VIDA, MATTHEW EVAN LARKUM. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science. 2020. • 2 $\begingroup$ How about the discrete equivalents to derivatives ,i.e., difference equations $\endgroup$ – Piyush Grover Mar 5 at 18:01 • 4 $\begingroup$ There are important discrete physical systems, for example quantum spin systems, which one can formulate without partial derivatives. In general, quantization probably helps - you base your description on an operator algebra, and nobody forces you to cast, e.g., $[x,p]=i$ in the position or momentum representations. Cf. the ladder operator formulation of the harmonic oscillator. $\endgroup$ – Michael Engelhardt Mar 5 at 18:15 • 12 $\begingroup$ I think the way you set up the question, the answer can only be No. (1) We know that mechanics can be formulated using derivatives. (2) Such models correctly reproduce the behavior of natural phenomena (excluding extreme regimes where quantum effects become important). (3) Any alternative formulation must reproduce the same behavior of natural phenomena. From your comments, it seems that (3) is all you require for your notion of equivalence. By that logic, (1) and (2) imply that any alternative formulation of mechanics will be equivalent to the one with derivatives. $\endgroup$ – Igor Khavkine Mar 5 at 20:18 • 5 $\begingroup$ @AidanRocke This wasn’t so much a counter-argument as a putative example of what you may want, or not. Basically integration by parts, or Stokes’ theorem, allows one to recast laws in an integral form avoiding derivatives although the setting is still “differential” geometry, sorry. For another example, recasting the nominally very “second-order” law that particles move on geodesics, see §§13–15 of Sternberg’s Einstein lecture General Covariance and the Passive Equations of Physics. $\endgroup$ – Francois Ziegler Mar 5 at 21:18 • 3 $\begingroup$ Would it be fair to interpret your question as: can one describe physics without differential calculus? $\endgroup$ – Michael Bächtold Mar 7 at 9:55 Yes. An example is the nuclear shell model as formulated by Maria Goeppert Mayer in the 1950's. (The same would also apply to, for example, the interacting boson model.) 
The way this type of shell model works is that you take a nucleus that is close to a closed shell in both neutrons and protons, and you treat it as an inert core with some number of particles and holes, e.g., $^{41}\text{K}$ (potassium-41) would be treated as one proton hole coupled to two neutrons. There is some vector space of possible states for these three particles, and there is a Hamiltonian that has to be diagonalized. When you diagonalize the Hamiltonian, you have a prediction of the energy levels of the nucleus. You do have to determine the matrix elements of the Hamiltonian in whatever basis you've chosen. There are various methods for estimating these. (They cannot be determined purely from the theory of quarks and gluons, at least not with the present state of the art.) In many cases, I think these estimates are actually done by some combination of theoretical estimation and empirical fitting of parameters to observed data. If you look at how practitioners have actually estimated them, I'm sure their notebooks do contain lots of calculus, including partial derivatives, or else they are recycling other people's results that were certainly not done in a world where nobody knew about partial derivatives. But that doesn't mean that they really require partial derivatives in order to find them. As an example, people often use a basis consisting of solutions to the position-space Schrödinger equation for the harmonic oscillator. This is a partial differential equation because it contains the kinetic energy operator, which is basically the Laplacian. But the reality is that the matrix elements of this operator can probably be found without ever explicitly writing down a wavefunction in the position basis and calculating a Laplacian. E.g., there are algebraic methods. And in any case many of the matrix elements in such models are simply fitted to the data. The interacting boson model (IBM) is probably an even purer example of this, although I know less about it. It's a purely algebraic model. Although its advocates claim that it is in some sense derivable as an approximation to a more fundamental model, I don't think anyone has ever actually succeeded in determining the IBM's parameters for a specific nucleus from first principles. The parameters are simply fitted to the data. Looking at this from a broader perspective, here is what I think is going on. If you ask a physicist how the laws of physics work, they will probably say that the laws of physics are all wave equations. Wave equations are partial differential equations. However, all of our physical theories except for general relativity fall under the umbrella of quantum mechanics, and quantum mechanics is perfectly linear. There is a no-go theorem by Gisin that says you basically can't get a sensible theory by adding a nonlinearity to quantum mechanics. Because of the perfect linearity, our physical theories can also just be described as exercises in linear algebra, and we can forget about a specific basis, such as the basis consisting of Dirac delta functions in position space. In terms of linear algebra, there is the problem of determining what the Hamiltonian is. If we don't have any systematic way of determining an appropriate Hamiltonian, then we get a theory that lacks predictive power. Even for a finite-dimensional space (such as the shell model), an $n$-dimensional space has $O(n^2)$ unknown matrix elements in its Hamiltonian.
Determining these purely by fitting to experimental data would be a vacuous exercise, since typically the number of observations we have available is $O(n)$. One way to determine all these matrix elements is to require that the theory consist of solutions to some differential equation. But there is no edict from God that says this is the only way to do so. There are other methods, such as algebraic methods that exploit symmetries. This is the kind of thing that the models described above do, either partially or exclusively.

Gisin, "Weinberg's non-linear quantum mechanics and supraluminal communications," Physics Letters A 143 (1–2): 1–2, http://dx.doi.org/10.1016/0375-9601(90)90786-N

• [1] Federico Poloni (Mar 7 at 9:57): Arguably, linear algebra allows one to compute derivatives: given a rational function (or an analytic function as the limit of its Taylor series), you can evaluate it in the matrix argument $\begin{bmatrix}\lambda & 1 \\ 0 & \lambda\end{bmatrix}$, and the result you obtain is precisely $f(\begin{bmatrix}\lambda & 1 \\ 0 & \lambda\end{bmatrix}) = \begin{bmatrix}f(\lambda) & f'(\lambda) \\ 0 & f(\lambda)\end{bmatrix}$. This is, essentially, automatic differentiation recast as linear algebra. So matrix algebra is, essentially, equivalent to derivatives.
• Ben Crowell (Mar 7 at 14:18): @FedericoPoloni: If I'm understanding you correctly, then you're assuming that the function has been expressed in the position basis. The point of my answer is that you can work with these models without ever even knowing any wavefunctions in the position basis. In the interacting boson model, nobody knows what the wavefunctions would be in the position basis.
• Federico Poloni (Mar 7 at 14:56): No, that is a more general statement that is independent of applications or bases: if you allow matrix algebra among the things that you are allowed to do, then you can use it to compute the derivative of any function that you can compute.
• [2] Michael Engelhardt (Mar 7 at 15:08): @FedericoPoloni - The irony is that the clever idea of automatic differentiation, and its putative realization through a biological system, become irrelevant if we formulate our physics problem such that it does not require any differentiation anymore, as the OP is asking us to do. That's where the OP lost me - the line of questioning seems completely self-defeating. (You probably had a similar reaction, going by some of your comments to the OP.)
• Aidan Rocke (Mar 7 at 16:46): @MichaelEngelhardt Why would it be self-defeating? I consider automatic differentiation in biological systems to be the most likely scenario, given what we know, but as a scientist I think it is important to carefully consider the alternative possibility.

As to question 2, there are certainly plenty of non-trivial discrete models in statistical physics, such as the Ising or Potts models, or lattice gauge theories with discrete gauge groups, that require no partial derivatives (or indeed any operations of differential calculus) at all to formulate and simulate; a minimal sketch of the Ising case follows below.
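For illustration, here is a minimal Metropolis Monte Carlo sketch of the 2-D Ising model in Python. The lattice size, temperature and sweep count are arbitrary illustrative choices; the point is that the update rule uses only algebra and random numbers, with no derivatives anywhere:

    import numpy as np

    rng = np.random.default_rng(1)
    L, T, sweeps = 32, 2.0, 400          # T in units of J/k_B (below T_c ~ 2.27)
    s = rng.choice([-1, 1], size=(L, L))

    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        # Energy change from flipping spin (i, j): sum over nearest neighbours
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

    print(f"magnetisation per spin at T = {T}: {abs(s.mean()):.3f}")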
Similarly, quantum mechanics can be formulated entirely in the operator formalism, and an entity incapable of considering derivatives could still contemplate the time-independent Schrödinger equation and solve it algebraically for the harmonic oscillator (using the number operator; see the sketch after the answers below) or the hydrogen atom (using the Laplace–Runge–Lenz–Pauli vector operator). So an answer to question 1 might be "at least anything that can be written as a discrete-time Markov chain with a discrete state space, as well as anything that can be recast as an eigenvalue problem", and other problems that can be recast in purely probabilistic or algebraic language should also be safe (although it might be hard to come up with their formulations without using derivatives at some intermediate step). As to question 3, I personally don't believe that an approach to classical mechanics or field theory can be correct if it isn't equivalent (at least at a sufficiently high level of abstraction) to formulating and solving differential equations. But the level of abstraction could conceivably be quite high -- for an attempt to formulate classical mechanics without explicitly referring to numbers (!) cf. Hartry Field's philosophical treatise "Science without Numbers".

• [3] John Stillwell (Mar 6 at 11:14): I believe Hartry Field avoids explicitly referring to numbers by assuming that physical space satisfies Hilbert's axioms for geometry, including the Archimedean and completeness axioms. From this one can derive a structure isomorphic to $\mathbb{R}$, so he actually does assume $\mathbb{R}$, implicitly.
• gmvh (Mar 6 at 12:46): As I said, eventually you have to be able to describe differential equations and all of that (which of course includes having $\mathbb{R}$ at your disposal). And I agree that Hartry Field implicitly assumes (the consistency of) $\mathbb{R}$; as far as I can tell, his nominalism is ultimately more a matter of presentation.
• Federico Poloni (Mar 7 at 9:59): See my comment to another answer: linear algebra alone is, in some sense, also equivalent to derivatives.
• Aidan Rocke (Mar 7 at 21:08): After reflecting upon your answer, I wonder whether Koopman–von Neumann classical mechanics might be a candidate solution? Ref: en.wikipedia.org/wiki/…
• [1] gmvh (Mar 9 at 15:08): I'm not familiar with KvN mechanics, but from the Wikipedia entry it doesn't really seem to meet your criteria -- note that the Liouville operator contains partial derivatives of the Hamiltonian function, and that simply putting those in as arbitrary operators won't work, since they would have to be related by the integrability condition on the gradient of the Hamiltonian.

Well, if you take out partial derivatives, at least quantum field theory and in particular conformal field theory will survive the massacre. The reason is explained in my MO answer: $p$-adic numbers in physics. One can use random/quantum fields $\phi:\mathbb{Q}_{p}^{d}\rightarrow \mathbb{R}$ as toy models of fields $\phi:\mathbb{R}^d\rightarrow\mathbb{R}$. In this $p$-adic or hierarchical setting, Laplacians and all that are nonlocal and not given by partial derivatives. Most equations in physics are local and therefore need partial derivatives in order to be formulated. What should remain, in the very hypothetical scenario proposed in the question, is everything pertaining to nonlocal phenomena.
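Picking up the number-operator remark from the answer above: here is a minimal linear-algebra sketch (Python/NumPy; the truncation size is an arbitrary choice) that recovers the harmonic-oscillator spectrum with no derivatives and no position-space wavefunctions, purely by diagonalizing a matrix built from truncated ladder operators:

    import numpy as np

    N = 12
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, a|n> = sqrt(n)|n-1>
    H = a.T @ a + 0.5 * np.eye(N)                # H = a^dagger a + 1/2, in units of hbar*omega

    E = np.linalg.eigvalsh(H)
    print(np.round(E[:5], 6))   # -> [0.5 1.5 2.5 3.5 4.5]

The output is the familiar ladder E_n = (n + 1/2)·ħω, obtained as a pure eigenvalue problem, exactly the kind of "recast in algebraic language" the question asks about.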
I'd query the contention that organisms or even inorganic matter compute in the sense described. For example, if I drop a stone on the surface of the earth, it falls in a straight line. To call this 'computing' a straight line seems rather a stretch of the word computation; to my thinking, to compute means that one ought to be conscious that one is carrying out a computation. That is, the person who dropped the stone is computing the straight line, not the stone itself. The stone merely moves in a straight line. We know it moves in a straight line and hence, by dropping it, we are describing a straight line.

• [1] Michael Engelhardt (Mar 8 at 15:01): This is an excellent answer, because it finally brings into focus the question of what we mean by computation. One way to think of it is that we, as humans, arrange two physical systems to behave in ways that can be mapped into each other: Say we are trying to predict what system A will do. If we can arrange system B to do the "same" thing, then by observing system B, we can predict A. B could be a traditional general purpose computer, but doesn't have to be. Now, there is no reason for us to hobble ourselves in performing the mapping by, say, outlawing derivatives ...
• Michael Engelhardt (Mar 8 at 15:08): ... it may well be that our understanding of both systems A and B, and therefore the construction of the mapping necessary for computation, hinges on using derivatives, even if system B does not "perform derivatives" in the traditional general purpose computer sense.
• Aidan Rocke (Mar 8 at 16:38): You may be interested in the historical note that I added to the question as well as this paper on brain computation: igi-web.tugraz.at/PDF/LNCS-10000-Theories_006_v1.pdf
Wave functions and equations: a summary

Post scriptum note added on 11 July 2016: This is one of the more speculative posts which led to my e-publication analyzing the wavefunction as an energy propagation. With the benefit of hindsight, I would recommend you to immediately read the more recent exposé on the matter that is being presented here, which you can find by clicking on the provided link. In fact, I actually made some (small) mistakes when writing the post below.

Original post:

Schrödinger's wave equations for spin-zero, spin-1/2, and spin-one particles in free space differ from each other by a factor of two:

1. For particles with zero spin, we write: ∂ψ/∂t = i·(ħ/m)·∇²ψ. We get this by multiplying the ħ/(2m) factor in Schrödinger's original wave equation – which applies to spin-1/2 particles (e.g. electrons) only – by two. Hence, the correction that needs to be made is very straightforward.

2. For fermions (spin-1/2 particles), Schrödinger's equation is what it is: ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ.

3. For spin-1 particles (photons), we have ∂ψ/∂t = i·(2ħ/m)·∇²ψ, so here we multiply the ħ/m factor in Schrödinger's wave equation for spin-zero particles by two, which amounts to multiplying Schrödinger's original coefficient by four.

Look at the coefficients carefully. It's a strange succession:

1. The ħ/m factor (which is just the reciprocal of the mass measured in units of ħ) works for spin-0 particles.
2. For spin-1/2 particles, we take only half that factor: ħ/(2m) = (1/2)·(ħ/m).
3. For spin-1 particles, we double that factor: 2ħ/m = 2·(ħ/m).

I describe the detail on my Deep Blue page, so please go there for more detail. What I did there can be summarized as follows:

• The spin-one particle is the photon, and we derived the photon wavefunction from Maxwell's equations in free space, and found that it solves the ∂ψ/∂t = i·(2ħ/m)·∇²ψ equation, not the ∂ψ/∂t = i·(ħ/m)·∇²ψ or ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ equations.
• As for the spin-zero particles, we simplified the analysis by assuming our particle had zero rest mass, and we found that we were basically modeling an energy flow.
• The analysis for spin-1/2 particles is just the standard analysis you'll find in textbooks.

We could speculate how things would look for spin-3/2 particles, or for spin-2 particles, but let's not do that here. In any case, we will come back to this. Let's first focus on the more familiar terrain, i.e. the wave equation for spin-1/2 particles, such as protons or electrons. [A proton is not elementary – as it consists of quarks – but it is a spin-1/2 particle, i.e. a fermion.]

The phase and group velocity of the wavefunction for spin-1/2 particles (fermions)

We'll start with the very beginning of it all, i.e. the two equations that the young Comte Louis de Broglie presented in his 1924 PhD thesis, which give us the temporal and spatial frequency of the wavefunction, i.e. the ω and k in the θ = ω·t − k·x argument of the a·e^{i·θ} wavefunction:

1. ω = E/ħ
2. k = p/ħ

This allows us to calculate the phase velocity of the wavefunction:

v_p = ω/k = (E/ħ)/(p/ħ) = E/p

This is an elementary wavefunction, several of which we would add with appropriate coefficients, with uncertainty in the energy and momentum ensuring our component waves have different frequencies; to a single elementary wavefunction, therefore, the concept of a group velocity does not apply. In effect, the a·e^{i·θ} wavefunction does not describe a localized particle: the probability to find it somewhere is the same everywhere.
We may want to think of our wavefunction being confined to some narrow band in space, with us having no prior information about the probability density function, and, therefore, we assume a uniform distribution. Assuming our box in space is defined by Δx = x₂ − x₁, and imposing the normalization condition (all probabilities have to add up to one), we find that the following logic should hold:

(Δx)·a² = (x₂ − x₁)·a² = 1 ⇔ Δx = 1/a²

However, we are, of course, interested in the group velocity, as the group velocity should correspond to the classical velocity of the particle. The group velocity of a composite wave is given by the v_g = ∂ω/∂k formula. Of course, that formula assumes an unambiguous relation between the temporal and spatial frequency of the component waves, which we may want to denote as ω_n and k_n, with n = 1, 2, 3,… However, we will not use the index, as the context makes it quite clear what we are talking about. The relation between ω_n and k_n is known as the dispersion relation, and one particularly nice way to calculate ω as a function of k is to distinguish the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ wave equation and, hence, re-write it as a pair of two equations:

1. Re(∂ψ/∂t) = −[ħ/(2m)]·Im(∇²ψ) ⇔ ω·cos(kx − ωt) = k²·[ħ/(2m)]·cos(kx − ωt)
2. Im(∂ψ/∂t) = [ħ/(2m)]·Re(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·[ħ/(2m)]·sin(kx − ωt)

Both equations imply the following dispersion relation:

ω = ħ·k²/(2m)

We can now calculate v_g = ∂ω/∂k as:

v_g = ∂ω/∂k = ∂[ħ·k²/(2m)]/∂k = 2ħk/(2m) = ħ·(p/ħ)/m = p/m = m·v/m = v

That's nice, because it's what we wanted to find. If the group velocity would not equal the classical velocity of our particle, then our model would not make sense.

We used the classical momentum formula in our calculation above: p = m·v. To calculate the phase velocity of our wavefunction, we need to calculate that E/p ratio and, hence, we need an energy formula. Here we have a lot of choice, as energy can be defined in many ways: is it rest energy, potential energy, or kinetic energy? At this point, I need to remind you of the basic concepts.

The argument of the wavefunction as the proper time

It is obvious that the energy concept that is to be used in the ω = E/ħ relation is the total energy. Louis de Broglie himself noted that the energy of a particle consisted of three parts:

1. The particle's rest energy m₀c², which de Broglie referred to as internal energy (E_int): it includes the rest mass of the 'internal pieces', as de Broglie put it (now we call those 'internal pieces' quarks), as well as their binding energy (i.e. the quarks' interaction energy);
2. Any potential energy (V) it may have because of some field (so de Broglie was not assuming the particle was traveling in free space): the field(s) can be anything – gravitational, electromagnetic – you name it: whatever changes the energy because of the position of the particle;
3. The particle's kinetic energy, which he wrote in terms of its momentum p: K.E. = m·v²/2 = m²·v²/(2m) = (m·v)²/(2m) = p²/(2m).

So the wavefunction, as de Broglie wrote it, can be written as follows:

ψ(θ) = ψ(x, t) = a·e^{i·θ} = a·e^{−i·[(E_int + p²/(2m) + V)·t − p∙x]/ħ}

This formula allows us to analyze interesting phenomena such as the tunneling effect and, hence, you may want to stop here and start playing with it. However, you should note that the kinetic energy formula that is used here is non-relativistic.
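A quick symbolic check of the group- and phase-velocity results (a Python/SymPy sketch, not part of the original post; it also reproduces the puzzling v/2 phase velocity discussed below):

    import sympy as sp

    hbar, m, k, v = sp.symbols('hbar m k v', positive=True)

    # Dispersion relation from the Schrodinger equation: omega = hbar*k^2/(2m)
    omega = hbar * k**2 / (2 * m)

    v_group = sp.diff(omega, k)       # d(omega)/dk = hbar*k/m
    v_phase = sp.simplify(omega / k)  # omega/k   = hbar*k/(2m)

    # With k = p/hbar = m*v/hbar, the group velocity equals the classical v
    print(sp.simplify(v_group.subs(k, m * v / hbar)))   # -> v
    print(sp.simplify(v_phase.subs(k, m * v / hbar)))   # -> v/2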
The relativistically correct energy formula is E = m_v·c², and the relativistically correct formula for the kinetic energy is the difference between the total energy and the rest energy: K.E. = E − E₀ = m_v·c² − m₀·c² = m₀·γ·c² − m₀·c² = m₀·c²·(γ − 1), with γ the Lorentz factor.

At this point, we should simplify our calculations by adopting natural units, so as to ensure the numerical value of c = 1, and likewise for ħ. Hence, we assume all is described in Planck units, but please note that the physical dimensions of our variables do not change when adopting natural units: time is time, energy is energy, etcetera. When using natural units, the E = m_v·c² formula reduces to E = m_v. As for our formula for the momentum, this formula remains p = m_v·v, but v is now some relative velocity, i.e. a fraction between 0 and 1. We can now re-write θ = (E/ħ)·t − (p/ħ)·x as:

θ = E·t − p·x = E·t − p·v·t = m_v·t − m_v·v·v·t = m_v·(1 − v²)·t

We can also write this as:

ψ(x, t) = a·e^{i·(m_v·t − p∙x)} = a·e^{i·[(m₀/√(1−v²))·t − (m₀·v/√(1−v²))∙x]} = a·e^{i·m₀·(t − v∙x)/√(1−v²)}

The (t − v∙x)/√(1−v²) factor in the argument is the proper time of the particle as given by the formulas for the Lorentz transformation of spacetime:

t′ = (t − v∙x)/√(1−v²) and x′ = (x − v∙t)/√(1−v²)

However, both the θ = m_v·(1 − v²)·t and θ = m₀·t′ = m₀·(t − v∙x)/√(1−v²) expressions are relativistically correct. Note that the rest mass of the particle (m₀) acts as a scaling factor as we multiply it with the proper time: a higher m₀ gives the wavefunction a higher density, in time as well as in space.

Let's go back to our v_p = E/p formula. Using natural units, it becomes:

v_p = E/p = m_v/(m_v·v) = 1/v

Interesting! The phase velocity is the reciprocal of the classical velocity! This implies it is always superluminal, with v_p ranging from ∞ down to 1 as v goes from 0 to 1 = c.

[Graph: the phase velocity v_p = 1/v as a function of the relative velocity v]

Let me note something here, as you may also want to use the dispersion relation, i.e. ω = ħ·k²/(2m), to calculate the phase velocity. You'd write:

v_p = ω/k = [ħ·k²/(2m)]/k = ħ·k/(2m) = ħ·(p/ħ)/(2m) = m·v/(2m) = v/2

That's a nonsensical result. Why do we get it? Because we are mixing two different mass concepts here: the mass that's associated with the component wave, and the mass that's associated with the composite wave. Think of it. That's where Schrödinger's equation is different from all of the other diffusion equations you've seen: the mass factor in the ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ equation is the mass of the particle that's being represented by the wavefunction that solves the equation. Hence, the diffusion constant ħ/(2m) is not a property of the medium. In that sense, it's different from the κ/k factor in the ∂T/∂t = (κ/k)·∇²T heat diffusion equation, for example. We don't have a medium here and, therefore, Schrödinger's equation and the associated wavefunction are intimately connected.

It's an interesting point, because if we're going to be measuring the mass as multiples of ħ/2 (as suggested by the ħ/(2m) = 1/[m/(ħ/2)] factor itself), then its possible values (for ħ = 1) will be 1/2, 1, 3/2, 2, 5/2,… Now that should remind you of a few things – things like harmonics, or allowable spin values, or… Well… So many things. 🙂

Let's do the exercise for bosons now.

The phase and group velocity of the wavefunction for spin-0 particles

My Deep Blue page explains why we need to drop the 1/2 factor in Schrödinger's equation to make it fit the wavefunction for bosons. We distinguished two bosons: (1) the (theoretical) zero-mass particle (which has spin zero), and (2) the (actual) photon (which has spin one).
Let’s first do the analysis for the spin-zero particle. • A zero-mass particle (i.e. a particle with zero rest mass) should be traveling at the speed of light: both its phase as well as its group velocity should be equal to = 1. In fact, we’re not talking composite wavefunctions here, so there’s no such thing as a group velocity. We’re not adding waves: there is only one wavefunction. [Note that we don’t need to add waves with different frequencies in order to localize our particle, because quantum mechanics and relativity theory come together here in what might well be the most logical and absurd conclusion ever: as an outside observer, we’re going to see all those zero-mass particles as point objects whizzing by because of the relativistic length contraction. So their wavefunction is only all over spacetime in their proper space and time, but not in ours!] • Now, it’s easy to show that, if we choose our time and distance units such that c = 1, then the energy formula reduces to E = m∙c2 = m. Likewise, we find that p = m∙c = m. So we have this strange condition: E = p = m. • Now, this is not consistent with the ω = ħ·k2/(2m) we get out of the ∂ψ/∂t = i·[ħ/(2m)]·∇2ψ equation, because E/ħ = ħ·(p/ħ)2/(2m) ⇔ E = m2/(2m) = m/2. That does not fit the E = p = m condition. The only way out is to drop the 1/2 factor, i.e. to multiply Schrödinger’s coefficient with 2. Let’s quickly check if it does the trick. We assume E, p and m will be multiples of ħ/2 (E = p = m = n·(ħ/2), so the wavefunction is ei∙[t − x]n·/2, Schrödinger’s constant becomes 2/n, and the derivatives for ∂ψ/∂t = i·(ħ/m)·∇2ψ are: • ∂ψ/∂t = −i·(n/2)·ei∙[t − x]·n/2 • 2ψ = ∂2[ei∙[t − x]·n/2]/∂x= i·(n/2)·∂[ei∙[t − x]·n/2]/∂x = −(n2/4)·ei∙[t − x]·n/2 So the Schrödinger equation becomes: i·(n/2)·ei∙[t − x]n·/2) = −i·(2/n)·(n2/4)·ei∙[t − x]·n/2 ⇔  n/2 = n/2 ⇔ 1 = 1 As Feynman would say, it works like a charm, and note that n does not have to be some integer to make this work. So what makes spin-1/2 particles different? The answer is: they have both linear as well as angular momentum, and the equipartition theorem tells us the energy will be shared equally among both , so they will pick up linear and angular momentum. Hence, the associated condition is not E = p = m, but E = p = 2m. We’ll come back to this. Let’s now summarize how it works for spin-one particles The phase and group velocity of the  wavefunction for spin-1 particles (photons) Because of the particularities that characterize an electromagnetic wave, the wavefunction packs two waves, capturing both the electric as well as the magnetic field vector (i.e. E and B). For the detail, I’ll refer you to the mentioned page, because the proof is rather lengthy (but easy to follow, so please do check it out). I will just briefly summarize the logic here. 1. For the spin-zero particle, we measured E, m and p in units of – or as multiples of – the ħ/2 factor. Hence, the elementary wavefunction (i.e. the wavefunction for E = p = m = 1) for the zero-mass particle is ei(x/2 − t/2). 2. For the spin-1 particle (the photon), one can show that we get two of these elementary wavefunctions (ψand ψB), and one can then prove that we can write the sum of the electric and magnetic field vector as: E + BE + B = ψ+ ψ= E + i·E = √2·ei(x/2 − t/2+ π/4) = √2·ei(π/4)·ei(x/2 − t/2) = √2·ei(π/4)·= √2·ei(π/4)·ei(x/2 − t/2) Hence, the photon has a special wavefunction. Does it solve the Schrödinger equation? It does when we use the 2ħ/m diffusion constant, rather than the ħ/m or ħ/(2m) coefficient. 
Let us quickly check it. The derivatives are:

• ∂ψ/∂t = −√2·e^{i(π/4)}·e^{i∙(t − x)/2}·(i/2)
• ∇²ψ = ∂²[√2·e^{i(π/4)}·e^{i∙(t − x)/2}]/∂x² = −√2·e^{i(π/4)}·e^{i∙(t − x)/2}·(1/4)

Note, however, that we have two mass, energy and momentum concepts here: E_E, p_E, m_E and E_B, p_B, m_B respectively. Hence, if E_E = p_E = m_E = E_B = p_B = m_B = 1/2, then E = E_E + E_B, p = p_E + p_B and m = m_E + m_B are all equal to 1. Hence, because E = p = m = 1 and we measure in units of ħ, the 2ħ/m factor is equal to 2 and, therefore, the modified Schrödinger equation ∂ψ/∂t = i·(2ħ/m)·∇²ψ becomes:

i·√2·e^{i(π/4)}·e^{i∙(t − x)/2}·(1/2) = −i·√2·2·e^{i(π/4)}·e^{i∙(t − x)/2}·(1/4) ⇔ 1/2 = 2/4 = 1/2

It all works out. Let's quickly check it for E, m and p being multiples of ħ, so we write: E = p = m = n·ħ = n. The wavefunction is then √2·e^{i(π/4)}·e^{i∙(t − x)·n/2}, Schrödinger's 2ħ/m constant becomes 2ħ/m = 2/n, and the derivatives for ∂ψ/∂t = i·(2ħ/m)·∇²ψ are:

• ∂ψ/∂t = −i·(n/2)·√2·e^{i(π/4)}·e^{i∙(t − x)·n/2}
• ∇²ψ = ∂²[√2·e^{i(π/4)}·e^{i∙(t − x)·n/2}]/∂x² = −√2·(n²/4)·e^{i(π/4)}·e^{i∙(t − x)·n/2}

So the Schrödinger equation becomes:

i·√2·e^{i(π/4)}·(n/2)·e^{i∙(t − x)·n/2} = −i·√2·e^{i(π/4)}·(2/n)·(n²/4)·e^{i∙(t − x)·n/2} ⇔ n/2 = n/2 ⇔ 1 = 1

It works like a charm again. Note the subtlety of the difference between the ħ/(2m) and 2ħ/m factors: it depends on us measuring the mass (and, hence, the energy and momentum as well) in units of ħ/2 (for spin-0 particles) or, alternatively (for spin-1 particles), in units of ħ. This is very deep – but it does make sense in light of the E_n = n·ħ·ω = n·h·f formula that solves the black-body radiation problem, as illustrated below. [The formula next to the energy levels is the probability of an atomic oscillator occupying that energy level, which is given by Boltzmann's Law. You can check things in my post on it.]

[Figure: energy levels E_n = n·ħ·ω with their Boltzmann occupation probabilities]

It is now time to look at something else.

Schrödinger's equation as an energy propagation mechanism

The Schrödinger equations above are not complete. The complete equation includes force fields, i.e. potential energy:

i·ħ·∂ψ/∂t = −[ħ²/(2m)]·∇²ψ + V·ψ

To write the equation like this, we need to move the i on the right-hand side of our ∂ψ/∂t = i·(2ħ/m)·∇²ψ equation to the other side, and multiply both sides with −1. [Remember: 1/i = −i.] Now, it is very interesting to do a dimensional analysis of this equation. Let's do the right-hand side first. The ħ² factor in ħ²/(2m) is expressed in J²·s². Now that doesn't make much sense on its own, but the mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-energy equivalence relation to express m in J/(m/s)² units. So we get: (J²·s²)·[(m/s)²/J] = J·m²·s²/s² = J·m². But we multiply that with some quantity (the Laplacian) that's expressed per m². So −(ħ²/2m)·∇²ψ is something expressed in joule, so it's some amount of energy! Interesting, isn't it? [Note that it works out fine with the additional V·ψ term, which is also expressed in joule.] On the left-hand side, we have ħ, and its dimension is the action dimension: J·s, i.e. force times distance times time (N·m·s). So we multiply that with a time derivative and we get J once again, the unit of energy. So it works out: we have joule units both left and right. But what does it mean? Well… The Laplacian on the right-hand side works just the same as for our heat diffusion equation: it gives us a flux density, i.e. something expressed per square meter (1/m²). Likewise, the time derivative on the left-hand side gives us a flow per second. But so what is it that is flowing here?
Well… My interpretation is that it is energy, and it's flowing between a real and an imaginary space – but don't be fooled by the terms here: both spaces are equally real, as both have an actual physical dimension. Let me explain.

Things become somewhat more comprehensible when we remind ourselves that the Schrödinger equation is equivalent to the following pair of equations:

1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) ⇔ ω·cos(kx − ωt) = k²·(ħ/2m)·cos(kx − ωt)
2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·(ħ/2m)·sin(kx − ωt)

So what? Let me insert an illustration here. [Illustration: the wavefunction as a link function between spacetime and an energy space.] See what happens. The wavefunction acts as a link function between our physical spacetime and some other space whose dimensions – in my humble opinion – are also physical. We have those sines and cosines, which mirror the energy of the system at any point in time, as measured by the proper time of the system. Let me be more precise. The wavefunction, as a link function between two spaces here, associates every point in spacetime with some real as well as some imaginary energy – but, as mentioned above, that imaginary energy is as real as the real energy. What it embodies, really, is the energy conservation law: at any point in time (as measured by the proper time) the sum of kinetic and potential energy must be equal to some constant, and so that's what's shown here. Indeed, you should note the phase shift between the sine and the cosine function: if one reaches the +1 or −1 value, then the other function reaches the zero point – and vice versa. It's a beautiful structure.

Of course, the million-dollar question is: is it a physical structure, or a mathematical structure? The answer is: it's a bit of both. It's a mathematical structure but, at the same time, its dimension is physical: it's an energy space. It's that energy that explains why amplitudes interfere – which, as you know, is what they do. So these amplitudes are something real, and as the dimensional analysis of Schrödinger's equation reveals that their dimension is expressed in joule, then… Well… Then these physical equations say what they say, don't they? And what they say is something like the diagram below.

[Diagram: energy shuttling between a real and an imaginary dimension, like two coupled springs]

Note that the diagram above does not show the phase difference between the two springs. The animation below does a better job here, although you need to realize the hand of the clock will move faster or slower as our object travels through force fields and accelerates or decelerates accordingly. We may relate that picture above to the principle of least action, which ensures that the difference between the kinetic energy (KE) and potential energy (PE) in the integrand of the action integral, i.e.

S = ∫ (KE − PE)·dt,

is minimized along the path of travel. The spring metaphor should also make us think of the energy formula for a harmonic oscillator, which tells us that the total energy – kinetic (i.e. the energy related to its momentum) plus potential (i.e. the energy stored in the spring) – is equal to T + U = m·ω₀²/2. The ω₀ here is the angular velocity, and we have two springs here, so the total energy would be the sum of both, i.e. m·ω₀², without the 1/2 factor.

Does that make sense? It's like an E = m·v² equation, so that's twice the (non-relativistic) kinetic energy. Does that formula make any sense? In the context of what we're discussing here, it does. Think about the limit situation by trying to imagine a zero-mass particle here (I am talking a zero-mass spin-1/2 particle this time). It would have no rest energy, so its only energy is kinetic, which is equal to:
K.E. = E − E₀ = mv·c² − m₀·c² = mc·c²

Why is mv equal to mc? Zero-mass particles must travel at the speed of light, as the slightest force on them gives them an infinite acceleration. So there we are: the m·ω₀² equation makes sense! But what if we have a non-zero rest mass? In that case, look at that pair of equations again: they give us a dispersion relation, i.e. a relation between ω and k. Indeed, using natural units again, so the numerical value of ħ = 1, we can write:

ω = k²/(2m) = p²/(2m) = (m·v)²/(2m) = m·v²/2

This equation seems to represent the kinetic energy but m is not the rest mass here: it's the relativistic mass, so that makes it different from the classical kinetic energy formula (K.E. = m₀·v²/2). [It may be useful here to remind you of how we get that classical formula. We basically integrate force over distance, from some start to some final point of a path in spacetime. So we write: ∫ F·ds = ∫ (m·a)·ds = ∫ [m·(dv/dt)]·ds = ∫ [m·(ds/dt)]·dv = ∫ m·v·dv. So we can solve that using the m·v²/2 primitive but only if m does not vary, i.e. if m = m₀. If velocities are high, we need the relativistic mass concept.] So we have a new energy concept here: m·v², and it's split over those two springs. Hmm… The interpretation of all of this is not so easy, so I will need to re-visit this. As for now, however, it looks like the Universe can be represented by a V-twin engine! 🙂

Is it real?

You may still doubt whether that new 'space' has an actual energy dimension. It's a figment of our mind, right? Well… Yes and no. Again, it's a bit of a mixture between a mathematical and a physical space: it's definitely not our physical space, as it's not the spacetime we're living in. But, having said that, I don't think this energy space is just a figment of our mind. Let me give you some additional reasons, besides the dimensional analysis we did above. For example, there is the fact that we need to take the absolute square of the wavefunction to get the probability that our elementary particle is actually right there! Now that's something real! Hence, let me say a few more things about that. The absolute square gets rid of the time factor. Just write it out to see what happens:

|r·e^(iθ)|² = |r|²·|e^(iθ)|² = r²·[√(cos²θ + sin²θ)]² = r²·(√1)² = r²

Now, the r gives us the maximum amplitude (sorry for the mix of terminology here: I am just talking the wave amplitude here – i.e. the classical concept of an amplitude – not the quantum-mechanical concept of a probability amplitude). Now, we know that the energy of a wave – any wave, really – is proportional to the square of the amplitude of the wave. It would also be logical to expect that the probability of finding our particle at some point x is proportional to the energy density there, isn't it? [I know what you'll say now: you're squaring the amplitude, so if the dimension of its square is energy, then its own dimension must be the square root, right? No. Wrong. That's why this confusion between amplitude and probability amplitude is so bad. Look at the formula: we're squaring the sine and cosine, to then take the square root again, so the dimension doesn't change: it's √(J²) = J.] The third reason why I think the probability amplitude represents some energy is that its real and imaginary part also interfere with each other, as is evident when you take the ordinary square (i.e. not the absolute square). Then the i² = −1 rule comes into play and, therefore, the square of the imaginary part starts messing with the square of the real part.
Just write it out:

(r·e^(iθ))² = r²·(cosθ + i·sinθ)² = r²·(cos²θ − sin²θ + 2i·cosθ·sinθ) = r²·(1 − 2sin²θ + 2i·cosθ·sinθ)

As mentioned above, if there's interference, then something is happening, and so then we're talking something real. Hence, the real and imaginary part of the wavefunction must have some dimension, and not just any dimension: it must be energy, as that's the currency of the Universe, so to speak. Let me add a philosophical note here—or an ontological note, I should say. When you think we should only have one physical space, you're right. This new physical space, in which we relate energy to time, is not our physical space. It's not reality—as we know it, as we experience it. So, in that sense, you're right. It's not physical space. But then… Well… It's a definitional matter. Any space whose dimensions are physical, is a physical space for me. But then I should probably be more careful. What we have here is some kind of projection of our physical space to a space that lacks… Well… It lacks the spatial dimension. It's just time – but a special kind of time: relativistic proper time – and energy—albeit energy in two dimensions, so to speak. So… What can I say? Just what I said a couple of times already: it's some kind of mixture between a physical and mathematical space. But then… Well… Our own physical space – including the spatial dimension – is something like a mixture as well, isn't it? We can try to disentangle them – which is what I am trying to do – but we'll never fully succeed.
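As a postscript, here is a two-line SymPy sketch (my own add-on) that makes the contrast explicit: the absolute square kills the time factor, while the ordinary square keeps the interfering cross-term:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
z = r * sp.exp(sp.I * theta)

print(sp.simplify(sp.Abs(z)**2))        # r**2: no theta left, the time factor is gone
print(sp.expand(z**2).rewrite(sp.cos))  # r**2*(cos(2*theta) + I*sin(2*theta)): theta survives
```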
Kolmogorov equation

1 Introduction

We are interested in the numerical discretization of the Kolmogorov equation [12]

(1)   \begin{equation*} \left\{ \begin{array}{lll} \partial_t f - \mu \partial_{xx} f - v(x) \partial_y f =0, & (x,y)\in\R^2, t>0,\\ f(x,y,0) =f_0(x,y), & (x,y)\in\R^2 \end{array} \right. \end{equation*}

where \mu>0 is a diffusion coefficient and v a potential function. This is one example of the degenerate advection-diffusion equations which have the property of hypo-ellipticity (see, for instance, [6, 13, 14]), ensuring the C^\infty regularity of solutions for t>0 ([6]). In the present case, the generator of the semigroup is constituted by the superposition of the operators \mu \partial_{xx} and v(x) \partial_y. Despite the presence of a first order term, which could lead to transport phenomena and, consequently, to the lack of smoothing, the regularizing effect is ensured by the fact that the commutator of these two operators is non-trivial, allowing one to gain regularity in the variable y. A full characterization of hypo-ellipticity can be found in [6]. Solutions of (1) also experience decay properties as t\to \infty. This is also a manifestation of hypo-coercivity (in the sense developed by Villani [13], [14]) as a byproduct of the hidden interaction of the two operators entering in the generator of the semigroup. In the particular case \mu=1 and v(x)=x, using the Fourier transform, the fundamental solution of (1) (starting from an initial Dirac mass \delta_{(x_0, y_0)}) can be computed explicitly, yielding the following anisotropic Gaussian kernel

(2)   \begin{equation*} K_{(x_0, y_0)}(x,y, t) = \frac{\sqrt 3}{2\pi t^2} \exp \bigg[{-\left( \frac{3|y-(y_0-tx_0)|^2}{t^3}+ \frac{3(y-(y_0-tx_0)) (x-x_0)}{t^2} + \frac{|x-x_0|^2}{t}\right)}\bigg] \end{equation*}

which exhibits different diffusivity and decay scales in the variables x and y. In view of the structure of the fundamental solution, one can deduce the following decay rates:

(3)   \begin{equation*} \| f(t)\|_{L^2}+ \sqrt t\, \| \partial_x f(t) \|_{L^2}+ t^{\frac 32} \|\partial_y f(t)\| _{L^2}\leq C \|f_0\|_{L^2} \end{equation*}

for solutions with initial data f_0 in L^2. Similar decay properties can be predicted by scaling arguments, due to the invariance properties of the equation in (1). These decay properties are of an anisotropic nature, with a different rate in the x and y directions. Indeed, in the x direction, as in the classical heat equation, we observe a decay rate of the order of t^{-1/2}, while, in the y variable, the decay is of order t^{-3/2}. Obtaining these decay properties by energy methods has been a challenging topic, of particular interest when dealing with more general convection-diffusion models that do not allow the explicit computation of the kernel. In this effort, the asymptotic behavior of the Kolmogorov equation and of several other relevant kinetic models was investigated intensively through the concept and techniques of hypo-coercivity, which allow one to make explicit the hidden diffusivity and dissipativity of the involved operators (see [13], [14] and the references therein). The literature on the asymptotic behaviour of models related to the Kolmogorov equation is huge. We refer for instance to [8], [9], [2] for earlier works, and to [4], [5] for more recent approaches.
Roughly speaking, it is by now well known that, by constructing well-adapted Lyapunov functionals through variations of the natural energy of the system, one can make the dissipativity properties of the semigroup emerge and then obtain the sharp decay rates. These techniques have also been developed in other contexts, such as partially dissipative hyperbolic systems (see [1]). In [10], Porretta and Zuazua introduce a numerical scheme that preserves this hypo-coercivity property at the numerical level, uniformly on the mesh-size parameters. The issue is relevant from a computational point of view since, as has been observed in a number of contexts (wave propagation, dispersivity of Schrödinger equations, conservation laws, etc. [15], [7]), convergence in the classical sense of numerical analysis (a property that concerns finite-time horizons) is not sufficient to ensure that the asymptotic behavior of the PDE solutions is captured correctly. The fact that the numerical approximation schemes preserve the decay properties of continuous solutions can be considered as a manifestation of the property of numerical hypo-coercivity. In [3], Foster et al. introduce a numerical scheme which preserves the long time behavior of solutions to the Kolmogorov equation. Their method is based on a self-similar change of variables that transforms the Kolmogorov equation into a new form, so that the problem of designing structure preserving schemes for the original equation amounts to building a standard scheme for the transformed equation. They also present an analysis of the operator splitting technique for the self-similar method and numerical results for the described scheme. Here, instead, we investigate this behavior using the characteristics-Galerkin finite element method (through FreeFem++ [11]) and, in particular, we compare the results with those obtained in [3].

2 Description of the numerical scheme

At the numerical level, we employ a finite element method based on the characteristics-Galerkin technique and, for the sake of simplicity and ease, we use the FreeFem++ software ([11]). As described above, the solution of Equation (1) not only diffuses in the direction of x, by the effect of the diffusion operator \mu \partial_{xx} f, but also diffuses in the direction of y, due to the transport part \partial_t f - v(x) \partial_y f. We will treat both effects, transport and diffusion, separately, using the method of characteristics, which we recall hereafter, for the equation \partial_t f - v(x) \partial_y f = 0, and linear or quadratic finite elements to discretise the diffusion term.

2.1 Transport

Let us consider the following scalar two-dimensional transport equation

(4)   \begin{equation*} \partial_t f + \bm c \cdot \nabla f = g, \quad \bm c \in\R^2, \textrm{ in } \Omega \subset\R^2 \times (0,T) \end{equation*}

for some function g. Let (x,y,t)\in \mathbb{R}^2 \times \mathbb{R}^+. This transport equation can be written using the total derivative

(5)   \begin{equation*} \frac{d}{ds} f(\bm X_{x,y,t}(s),s) = g \end{equation*}

if and only if the curve (\bm X_{x,y,t}(s),s) satisfies the system of ordinary differential equations

(6)   \begin{equation*} \left\{ \begin{array}{ll} \frac{d}{ds}\bm X_{x,y,t}(s) = \bm c(\bm X_{x,y,t}(s),s),& \forall s\in(0,t), \\ \bm X_{x,y,t}(t) = (x,y) \\ \end{array}\right.
\end{equation*}

Under suitable assumptions on \bm c, the problem is well defined and there exists a unique solution \bm X_{x,y,t} to (6), called the characteristic curve reaching (or passing through) the point (x,y) at time t. Since we cannot compute explicitly, in general, the solution of the equation (6), hence of (4), we look for an approximate solution. Denoting by \delta t>0 the time step and setting t_{n+1} = t_n + \delta t, an easy manner to approximate the solution of Equation (4) is to perform a backward convection by the method of characteristics

(7)   \begin{equation*} \frac{1}{\delta t} \left(f^{n+1}(x,y)-f^{n}(\bm X_{x,y,t_{n+1}}(t_n))\right) = g^n(x,y) \end{equation*}

where f^n(x,y) = f(x,y,t_n) and \bm X_{x,y,t_{n+1}}(t_n) is an approximation, as shown below, of the solution at time t_n=n \delta t of the ordinary differential equation (6) for s\in(t_n,t_{n+1}) with the final data \bm X_{x,y,t_{n+1}}(t_{n+1}) = (x,y). Assuming f regular enough, by Taylor expansion, one can write

    \[f^n(\bm X_{x,y,t_{n+1}}(t_{n}))=f^n(\bm X_{x,y,t_{n+1}}(t_{n+1})) - \delta t \ \bm c((x,y),t_n) \cdot \nabla f^n(x,y) + O(\delta t^2)\]

Applying also a Taylor expansion to the function t\mapsto f^n((x,y)-t \, \bm c((x,y),t_n)), we get the same expansion up to O(\delta t^2), and therefore one can approximate f^n(\bm X_{x,y,t_{n+1}}(t_{n})) by f^n((x,y)- \delta t \ \bm c((x,y),t_n)). For the sake of clarity, in the sequel, we denote by X(t) the characteristic curve passing through the point (x,y) at time t.

2.2 Numerical algorithm

For numerical purposes, we consider Equation (1) in \Omega \subset \mathbb{R}^2 with homogeneous Neumann boundary conditions. Keeping in mind the characteristics method, Equation (1) can be written

(8)   \begin{equation*} \left\{ \begin{array}{lll} \frac{d}{dt} f(\bm X(t)) - \mbox{div}(A \nabla f) =0, & (x,y)\in\Omega, t\in (0,T), T>0\\ A \nabla f \cdot \bm n =0, & \textrm{ on } \partial\Omega\\ f(x,y,0) =f_0(x,y), & (x,y)\in\Omega \end{array} \right. \end{equation*}

where \bm n stands for the outward unit normal to \Omega, and for all s\in(0,t), \bm X is the solution of

(9)   \begin{equation*} \left\{ \begin{array}{ll} \frac{d}{ds}\bm X(s) = \bm v(\bm X(s)),& \forall s\in(0,t), \\ \bm X(t) = (x,y) \ .\\ \end{array}\right. \end{equation*}

Here, we use the following notations \bm v = \left(\begin{array}{c} 0\\-v \end{array}\right) and A = \left(\begin{array}{cc} \mu& 0\\0&0 \end{array}\right). Formally, thus, one can write, for any \varphi in some functional space V, the weak form of Equation (8) as follows

(10)   \begin{equation*} \int_{\Omega} \frac{d}{dt} f(\bm X(t)) \varphi \ dx dy + \int_{\Omega} A \nabla f \cdot \nabla \varphi \ dx dy = 0 \end{equation*}

Let t_0<t_1<\ldots < t_M = T denote the discrete times, with t_n = n \delta t, where \delta t denotes the time step, and set M = T/\delta t. Using the method of characteristics for the total derivative (see Section 2.1), the weak form (10) can be approximated by

    \begin{equation*} \int_{\Omega} \frac{1}{\delta t}\left(f^{n+1}-f^n \circ \bm X^n \right)\varphi \ dx dy + \int_{\Omega} A \nabla f^{n+1} \cdot \nabla \varphi \ dx dy = 0 \end{equation*}

that is,

(11)   \begin{equation*} a(f^{n+1},\varphi) = \frac{1}{\delta t}\left(f^n \circ \bm X^n,\varphi\right) \end{equation*}

with

    \[a(f,\varphi) = \frac{1}{\delta t}(f,\varphi) + (A\nabla f,\nabla \varphi) \ .\]

Here (\cdot,\cdot) is the inner product in L^2(\Omega).
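To make the backward convection f^n \circ \bm X^n concrete before turning to the finite element formulation, the following Python/NumPy sketch (our own illustration, not part of the FreeFem++ pipeline) performs one such step on a uniform grid, with an explicit Euler approximation of the characteristic foot point and bilinear interpolation; this is the structured-grid analogue of what FreeFem++'s convect does on a triangulation:

```python
import numpy as np

def convect_step(f, cx, cy, dt, dx, dy):
    """One backward semi-Lagrangian step: returns an approximation of f o X^n,
    i.e. f evaluated at the foot point (x,y) - dt*(cx,cy) of the characteristic."""
    ny, nx = f.shape
    X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy)
    # Foot of the characteristic, clipped to the domain (crude treatment of the boundary)
    Xb = np.clip(X - dt * cx, 0.0, (nx - 1) * dx)
    Yb = np.clip(Y - dt * cy, 0.0, (ny - 1) * dy)
    # Bilinear interpolation of f at the foot points
    i0 = np.floor(Xb / dx).astype(int); j0 = np.floor(Yb / dy).astype(int)
    i1 = np.minimum(i0 + 1, nx - 1);    j1 = np.minimum(j0 + 1, ny - 1)
    s = Xb / dx - i0; r = Yb / dy - j0
    return ((1 - r) * ((1 - s) * f[j0, i0] + s * f[j0, i1])
            + r * ((1 - s) * f[j1, i0] + s * f[j1, i1]))
```

For the Kolmogorov equation one takes cx = 0 and cy = -v(x), and the diffusion term is then treated implicitly through the variational problem (11).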
Therefore, denoting by \tau_h a partition of \Omega by triangles and by V_h the P_k finite element space (of degree k), the discrete weak form of the problem (8) reads: find \{f_h^n\}_{n=1}^{M=T/\delta t} \subset V_h such that for n=1,\dots, M,

    \[a(f_h^{n+1},\varphi_h) = \frac{1}{\delta t}\left(f_h^n \circ \bm X^n,\varphi_h\right), \ \forall \varphi_h \in V_h \ .\]

The FreeFem++ script corresponding to the problem may be written as follows.

FreeFem++ code:

// This script solves the Kolmogorov equation
//   f_t - mu*f_xx - v(x)*f_y = 0   on Omega x [0,T]
// with homogeneous Neumann ("free") boundary conditions,
// using the characteristics-Galerkin finite element method.
//   f   : unknown scalar function
//   phi : test function
//   v(x) = x : potential function

// Omega : square mesh [-10,10] x [-10,10]
real aa = 10;
real x0 = -aa, x1 = aa;
real y0 = -aa, y1 = aa;
int m = 100;
mesh Th = square(m, m, [x0 + (x1-x0)*x, y0 + (y1-y0)*y]);

real Tf = 10, dt = 0.01, mu = 1; // final time, time step, viscosity parameter
fespace Vh(Th, P1);              // linear finite elements (use P2 for quadratic)
Vh f, phi, c;
Vh fold = exp(-x^2 - y^2);       // initial data

problem Kolmogorov(f, phi)
    = int2d(Th)(f*phi/dt + mu*(dx(f)*dx(phi)))
    - int2d(Th)(c*phi/dt);

for (real t = 0; t <= Tf; t = t + dt) {
    c = convect([0, -x], -dt, fold); // backward convection: c ~ fold o X^n, velocity (0, -v(x))
    Kolmogorov;                      // solve the variational problem for f at time t + dt
    fold = f;
}

3 Numerical experiment

In this section we present a test case from [3], for which an exact solution of the Kolmogorov equation (1) with \mu=1 and v(x)=x is available. In particular, we compare our results to the ones obtained in [3]. For our numerical test case, we have used linear finite elements. The initial value problem (1) with the initial data f_0(x,y) = \exp(-x^2-y^2) admits the following exact solution

    \[f_{ex}(x,y,t) = \frac{\exp\left(-\frac{(3+3t^2 +4t^3)x^2 +6t(1+2t)xy+3(1+4t)y^2}{3+12t+4t^3 +4t^4}\right)}{\sqrt{1 + 4 t + \frac43 t^3 + \frac43 t^4}} \ .\]

As done in [3], for each numerical test, we have considered the time interval and the problem domain to be respectively [0, T=10] and \Omega = [-10, 10] \times [-10,10]. The time step is kept constant, equal to \delta t = 0.01, and the number of triangles along each side of the domain is given by m=50, 100 and 150. As one can see in Movie 1 below, the support of the function grows beyond the problem domain in the given time interval and interacts with the boundary. This interaction, since we do not use transparent boundary conditions here, increases the error, as one can also observe in Figure 1. We also show the time evolution of \|f(\cdot,t)-f_{ex}(\cdot,t)\|_2, \|\partial_xf(\cdot,t)\|_2, \|\partial_y f(\cdot,t)\|_2 and

    \[D(t)=\left(\| f(t)\|_{L^2}+ \sqrt t\, \| \partial_x f(t) \|_{L^2}+ t^{\frac 32} \|\partial_y f(t)\| _{L^2}\right)/\|f_0\|_{L^2} \ ,\]

in Figure 2. The L_2 error at time T=10 is approximately 0.0072, as one can see in Figure 2(a). Moreover, the error \int_0^T \| f(\cdot,t)-f_{ex}(\cdot,t)\|_2 \, dt is approximately of order 0.02. We also observe, due to the interaction with the boundary, that the errors increase noticeably for each numerical experiment at around time t\approx 8.5. Therefore, in order to compute the numerical order of convergence, we have computed, for each m, \max_t \|f(\cdot,t)-f_{ex}(\cdot,t)\|_2. We find an order of almost 1, which is satisfactory. Finally, we have computed the quantity D(t), for which we numerically show that the constant in the decay estimate (3) is C=1, as shown in Figure 2(d).
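Formulas of this type are easy to mistype when transcribing, so it is worth checking them symbolically before using them as a benchmark. A small SymPy sketch of such a check (our own add-on), which applies the Kolmogorov operator with \mu=1 and v(x)=x to the exact solution above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
t = sp.symbols('t', positive=True)

# Exact solution of f_t - f_xx - x*f_y = 0 with f(x,y,0) = exp(-x^2-y^2), as given above
fex = sp.exp(-((3 + 3*t**2 + 4*t**3)*x**2 + 6*t*(1 + 2*t)*x*y + 3*(1 + 4*t)*y**2)
             / (3 + 12*t + 4*t**3 + 4*t**4)) \
      / sp.sqrt(1 + 4*t + sp.Rational(4, 3)*t**3 + sp.Rational(4, 3)*t**4)

residual = sp.diff(fex, t) - sp.diff(fex, x, 2) - x * sp.diff(fex, y)
print(sp.simplify(residual))  # should print 0 if the formula is transcribed correctly
```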
Movie 1: Numerical simulation of the test case.

Figure 1.a: Numerical solution at time T=10 (FreeFem++ solution with m=100).
Figure 1.b: Exact solution at time T=10.

Figure 2.a: L_2 error, t\mapsto\|f(\cdot,t)-f_{ex}(\cdot,t)\|_2.
Figure 2.b: t\mapsto \|\partial_xf(\cdot,t)\|_2.
Figure 2.c: t\mapsto \|\partial_yf(\cdot,t)\|_2.
Figure 2.d: t\mapsto D(t).

To end, we present a last numerical simulation with a rotating and moving initial datum.

Movie 2: Numerical simulation on \Omega = [0,10]\times[0,20], [0,T=0.6], with \mu=10^{-3}, v(x)=-x and f_0(x,y) = 20 \exp(-(y-15)^2) \exp(-0.1(x-10)^2).

[1] K. Beauchard and E. Zuazua, Sharp large time asymptotics for partially dissipative hyperbolic systems, Arch. Ration. Mech. Anal. 199 (2011), 177-227.
[2] A. Carpio, Long-time behavior for solutions of the Vlasov-Poisson-Fokker-Planck equation, Math. Methods Appl. Sci. 21 (1998), 985-1014.
[3] E. L. Foster, J. Lohéac and M.-B. Tran, A Structure Preserving Scheme for the Kolmogorov Equation, preprint 2014 (arXiv:1411.1019v3).
[4] F. Hérau, Short and long time behavior of the Fokker-Planck equation in a confining potential and applications, J. Funct. Anal. 244 (2007), 95-118.
[5] F. Hérau and F. Nier, Isotropic hypoellipticity and trend to equilibrium for the Fokker-Planck equation with a high-degree potential, Arch. Ration. Mech. Anal. 171 (2004), 151-218.
[6] L. Hörmander, Hypoelliptic second order differential equations, Acta Math. 119 (1967), 147-171.
[7] L. Ignat, A. Pozo and E. Zuazua, Large-time asymptotics, vanishing viscosity and numerics for 1-D scalar conservation laws, Math. of Computation, to appear.
[8] A. M. Il'in, On a class of ultraparabolic equations, Soviet Math. Dokl. 5 (1964), 1673-1676.
[9] A. M. Il'in and R. Z. Kasminsky, On the equations of Brownian motion, Theory Probab. Appl. 9 (1964), 421-444.
[10] A. Porretta and E. Zuazua, Numerical hypocoercivity for the Kolmogorov equation, Mathematics of Computation 86 (2017), 97-119.
[11] F. Hecht, O. Pironneau, A. Le Hyaric and K. Ohtsuka, FreeFem++ manual, 2005.
[12] A. Kolmogoroff, Zufällige Bewegungen (zur Theorie der Brownschen Bewegung), Annals of Mathematics 35 (1934), 116-117.
[13] C. Villani, Hypocoercivity, Mem. Amer. Math. Soc. 202 (2009).
[14] C. Villani, Hypocoercive diffusion operators, in International Congress of Mathematicians, Vol. III, 473-498, Eur. Math. Soc., Zürich, 2006.
[15] E. Zuazua, Propagation, observation, and control of waves approximated by finite difference methods, SIAM Review 47 (2005), 197-243.

Authors: Mehmet Ersoy, Enrique Zuazua
November, 2017
Aneesur Rahman Prize for ETH-Zurich professor Matthias Troyer Matthias Troyer, a professor of computational physics at ETH Zurich, has received the Aneesur Rahman Prize 2016 for outstanding achievements in his field. As the Selection Committee stated, Troyer was honoured for his ground-breaking work in many “seemingly intractable areas” of so-called many-body quantum physics and for providing efficient, sophisticated computer codes for the scientific community. The prize is one of the few that are awarded in the field of computational physics. November 25, 2015 - by Simone Ulmer ETH professor Matthias Troyer has received the Aneesur Rahman Prize 2016. (Photo: Giulia Marthaler, ETH Zurich) Professor Troyer, what does this award mean to you? The prize is a wonderful acknowledgement of my group’s achievements in this area of physics, which is still very much in its infancy. In physics, it is traditionally new discoveries in theory and experimentation that are honoured. The Rahman Prize is one of the few prizes in computational physics. It recognises achievements in the development of new methods to solve physical problems and the simulation of difficult problems on computers. The award shows that we are one of the world’s leading groups in this field. How did you end up in computational physics? Through my interest in supercomputing and physics. Even at high school I was interested in both science and computing. Back then, I thought “I know how to program but I don’t understand science well enough” and that is why I studied physics. For my diploma thesis I had the opportunity to do a project on a Cray X-MP supercomputer – and that’s how I got to do physics with state-of-the-art supercomputers like we have at CSCS today. So, what is many-body quantum physics, and why is it so hard to solve? We’ve known the Schrödinger equation, which describes how materials behave, for nearly a century. While it is a simple equation, a macroscopic solid state is composed of many particles, electrons and atomic nuclei. The problem is not finding the equations that describe a solid state, but solving them. For one particle, the Schrödinger equation is a partial differential equation in three dimensions, which is much simpler than similar equations for fluid flows as they appear in simulations of climate and weather. To describe the behaviour of many electrons, however, one needs to solve equations in 3N dimensions. For a million particles, this is a differential equation in three million dimensions. Due to the curse of dimensionality, this is an enormous task. What are the “seemingly intractable areas of quantum many-body physics” that the Selection Committee touched upon? Because of the huge number of dimensions, it is difficult to solve these many-particle problems. It isn’t enough to simply wait for faster and larger supercomputers but one needs to have new ideas and new approaches to this problem. The power of supercomputers has grown exponentially in the last twenty or thirty years, but we have made even greater progress in the development of new algorithms. The software algorithms you use to solve particular problems? Exactly. But you can’t afford to limit yourself to just developing algorithms. We develop algorithms, implement them in software, optimise them for supercomputers and finally use them to solve problems. By combining new algorithms, good software and supercomputers, we can make progress and solve problems that nobody could solve before. 
While we cannot solve every problem, we have made a lot of progress in certain areas. Can you give an example? We studied phase transitions in quantum systems, ultra-cold quantum gases and so-called supersolids – materials that are simultaneously perfect crystals and liquids that flow without friction. Moreover, we developed new methods for simulations of correlated electrons, where we managed to improve the performance of the algorithms by a factor of 100,000. These kinds of improvements enable one to tackle new problems that wouldn’t have been possible in the past. The Prize Committee explicitly mentions your work on increasing the efficiency of software.  We publish much of our software within the scope of the ALPS project, which stands for “Algorithms and Libraries for Physics Simulations”, and make it open source, so that everyone can use it. But you don’t just optimise software; you also use it to research the aforementioned phenomena.  Indeed. And besides that, we also test and simulate a new class of computers known as quantum computers, and develop algorithms for future such systems. We are already thinking about the types of problems that we might be able to solve with state-of-the-art algorithms when quantum computers become available. We are most interested in problems that we can’t solve classically, not even with the fastest supercomputer that we’ll have in twenty years. Are you already cooperating with manufacturers in this regard?  Yes, we are collaborating with various companies that are interested in quantum computers, especially Microsoft. We currently simulate materials for quantum bits and develop new applications for quantum computers. Did this motivation come through D-Wave, which launched the first apparent quantum computer? Our interest in quantum computing predates D-Wave. The fact that D-Wave built a quantum annealer heightened the interest of companies and government organisations and as a result more resources are now available. Half of my group is working on topics related to quantum computing.  However, the devices built by D-Wave are not the quantum computers we are thinking of but are rather special purpose devices for solving particular optimization problems. When will we have a “real” quantum computer? In the next five years we will be able to produce devices that can outperform conventional supercomputers for specific physics problems. While this might not be able to solve problems of general interest, it will still demonstrate that one can compute something that is regarded as difficult. Within twenty years I expect that we can build quantum computers that will solve certain applications of wider interest more quickly and effectively than a classic supercomputer. But that doesn’t mean that this quantum computer will be suitable for all applications, does it? A quantum computer will always be a special purpose high-performance computer. Even conventional supercomputers are already niche products that are only needed for certain applications. For most people, a smartphone or PC is sufficient. While quantum computers are able to calculate anything that classic computers can, they will demonstrate their specific strength in a narrow range of really difficult problems. We want to find out for which problems this is the case and what we can solve better than with a classic computer. Does that mean we don’t really know which problems we hope to be able to solve with a quantum computer yet? 
There are lots of ideas of how to apply them to fundamental science problems. However, given the considerable resources that the development of a quantum computer will require, companies investing in quantum computing naturally ask about their application potential. Consequently, we are already developing and optimising algorithms for quantum computers in order to demonstrate that it will be possible to solve certain problems on quantum computers better than on classical ones. We already know several applications in the field of cryptography, quantum chemistry and materials science and are on the lookout for others in new areas. Is it easy to find people who are working in this field? That is not a problem at all! Many excellent students are interested in quantum computing. Even if D-Wave’s products are controversial, the company made the field of quantum computing extremely popular and inspired businesses and students to do it better. Will you invest your prize money in the development of quantum computing? That would be less than a drop in the ocean, but of course the prize will help further our research in the field both directly and indirectly. Matthias Troyer Matthias Troyer studied in Linz, Austria and ETH Zurich, Switzerland where he received his diploma in physics in 1991 and his doctorate in 1994. After a postdoctoral year at ETHZ he spent three years as postdoc at the University of Tokyo before returning to ETHZ initially as lecturer and since 2005 as full professor of computational physics. Working at the interface between physics and computational science he has made contributions to quantum phase transitions in quantum magnets, supersolidity of bosons, strongly correlated electrons, ultracold quantum gases and the development of simulation algorithms for quantum many-body systems. To make modern simulation methods accessible to a broader community he initiated the open-source ALPS project. His interest in advanced high-performance computing systems has recently led him towards the testing and development of quantum devices and on the optimization of quantum algorithms. Troyer won a gold medal at the International Chemistry Olympiad in 1986, received the ETH Medal for his doctoral thesis in 1994, and was awarded an ERC Advanced Grant in 2012. He is a Fellow of the American Physical Society and currently serves as Member and Trustee of the Aspen Center for Physics. Aneesur Rahman Prize The prize is presented annually to recognize and encourage outstanding achievement in computational physics research. It  consists of $10,000, an allowance for travel to the meeting of the Society at which the prize is awarded and at which the recipient will deliver the Rahman Lecture, and a certificate citing the contributions made by the recipient.
23 August 2017

Electrical control simulation of near infrared emission in SOI-MOSFET quantum well devices

J. of Nanophotonics, 11(3), 036016 (2017). doi:10.1117/1.JNP.11.036016

In the race to realize ultrahigh-speed processors, silicon photonics research is part of the efforts. Overcoming the silicon indirect bandgap with a special geometry, we developed a concept of a metal–oxide–semiconductor field-effect transistor, based on a silicon quantum well structure, that enables control of light emission. This quantum well consists of a recessed ultrathin silicon layer, obtained by a gate-recessed channel and limited between two oxide layers. The device's coupled optical and electrical properties have been simulated for channel thicknesses varying from 2 to 9 nm. The results show that this device can emit near infrared radiation in the 1 to 2 μm range.

The need for higher processing speed imposes a great technological challenge since reducing the internal distance between the processor transistors increases RF interference phenomena due to the electron-motion-induced electric field in the internal communication path. It is commonly accepted that one of the best ways to overcome these interference problems is to use optical communication instead of electrical communication, since photons do not interact with each other.1–3 Therefore, great efforts are conducted to obtain light-emitting Si-based devices as building blocks of integrated optical and electrical processing.4–7 Unfortunately, Si has an indirect bandgap that prevents electron recombination light emission.8 This is why the technology of electro-optic devices is based mainly on direct gap III–V semiconductors, such as GaAs. However, even if GaAs high-quality devices already exist, it still remains a great technological challenge to combine both Si- and GaAs-based blocks in the same processor chip, since it is difficult to grow high-quality GaAs layers on an Si substrate.9 From the early times of the microelectronic industry, microprocessors have been developed using silicon as the starting wafer material. This technology has reached high maturity, and processors can be made at very large scale production and at low cost. Therefore, it is highly preferable to find a way to obtain optical emission from a silicon-based device rather than to combine Si- and GaAs-based devices, since it seems that the future forecast is well oriented to silicon photonics.4–6,10 Despite silicon's indirect bandgap nature, which makes it a very poor light emitter,11 not only has electroluminescence been observed in ultrathin silicon,12 but photo-activated silicon-based devices as well as light-emitting devices have also been recently developed.13–15 This renewed interest in silicon photonics has top high-tech corporations investing more and more efforts into developing these kinds of technologies.16 While electroluminescence (EL) from silicon metal-oxide-semiconductor (MOS) devices can be used for testing integrated circuits (ICs),17–20 practical silicon-emitting devices are still difficult to realize. Moreover, some methods developed to obtain silicon-based photoemission devices are not efficient enough or may induce some unacceptable device degradation through the hot-electron injection mechanism, causing threshold voltage shift, thermal emission, device lifetime reduction, and more.
Therefore, it is believed that the most efficient way to obtain Si-based light-emitting devices is to use the intraband electron recombination in a quantum well. Our recent research aims at a silicon-on-insulator (SOI) metal–oxide–semiconductor field-effect transistor (MOSFET) device called MOSQWELL (MOSFET Quantum Well), for which light emission can be expected from intra-subband electron recombination in the silicon quantum well.21 In the past, some works described the influence of very thin layers in MOSFET devices22,23 as well as quantum-well-based photodetectors.24 Design and simulation of the present devices were conducted using the advanced COMSOL Multiphysics software package. The simulations describe the optical emission spectra in the 1 to 2 μm domain (near infrared), which is the relevant optical communication band. In the present paper, these spectra will be presented as a function of the drain voltage, showing the dependence between the light emission and the electric properties of the transistor, which is a necessary step toward optical communication.

Device Simulation Model

Device Structure

We have conducted simulations on devices with channel thicknesses varying from 2 to 9 nm. Such a thin channel can be achieved from commercially available SOI wafers (50-nm Si thickness in our case) using a selective "gate-recessed" channel (GRC). As an example, a 4-nm-thick channel device obtained by this method is presented in Fig. 1. Along with the three-dimensional (3-D) structure of the different layers shown in Fig. 1(a), a zoomed view of the GRC, enclosed between two oxide layers (which is the quantum well structure), is presented in Fig. 1(b). The color legend enables the identification of the different material layers of the device: monocrystalline silicon in yellow, polysilicon in green, silicon oxide (both gate oxide and buried oxide) in red, silicon nitride in cyan, and aluminum contacts in blue.

Fig. 1 Description of the MOSQWELL device structure using the COMSOL Multiphysics package: (a) 3-D structure. The color legend presents the different material layers: silicon (yellow), polysilicon (green), silicon oxide (red), silicon nitride (cyan), and aluminum (blue). (b) Zoomed view of the recessed channel (4-nm thickness). The SOI layer is 50 nm. Units are in nm.

Need for an Accurate Structure and Mesh

COMSOL software is based on finite elements calculation. Therefore, we defined the device using a high-density mesh of vertices. The high density of the mesh is needed because of the ultrathin channel layer. Because of computing and memory limitations, we had to vary the mesh density along the device so that the software would not crash and the simulation would run in a reasonable time. The mesh model is shown in Fig. 2. It can be seen that the large structures are described with relatively few vertices, whereas many more vertices are needed to faithfully describe the thin layers.

Fig. 2 Complex mesh for the COMSOL simulation of the MOSQWELL device's structure: (a) 3-D view of the dense mesh and (b) zoomed view of the device channel with denser regions in the upper layers.

Physical Model Limitations

Though the calculations were conducted under the semiconductor module package of COMSOL, we had to adapt the module to take into account the expected quantum effects in the ultrathin transistor channel, which can be described as a quantum well.
The COMSOL software package provides postprocessing variables for spontaneous emission.25 These postprocessing variables enable the spontaneous emission spectrum to be plotted as a function of photon energy, wavelength, and frequency. Additionally, it is possible to directly access the photon energy, wavelength, and frequency variables through the extra dimension that is added by the optical transitions feature, where previously these quantities needed to be calculated using an expression in terms of the angular frequency. Previous works also showed the usage of COMSOL modules for solving the Schrödinger–Poisson equation system.26–29 Indeed, a two-dimensional (2-D) Poisson–Schrödinger solver is needed. It should be capable of producing a potential profile and of calculating the eigenenergies at any cross section between the drain and the source. It is of course required to obtain the various subband profiles from the drain to the source for calculating the drain current by the mode-space approach, where the transmission coefficient needs to be calculated for the different subband profiles.27 By this means, we were able to calculate the dependence of the light-emission intensity on the drain voltage.

To take into account the expected quantum effects in the ultrathin transistor channel, which can be described as a quantum well, we had to use the module in the following way. We defined the channel as a virtual semiconductor having electronic property values taken from silicon. The gaps between the subband levels were set equal to the energy level transitions found from the quantum well model described in the following section. Therefore, we could not calculate the whole emission spectrum from the device at once, and we had to conduct the simulation for each emission line (radiative transition) separately, for given drain and gate voltages. By this means, we were able to calculate the light emission intensity dependence on the drain voltage for the whole emission spectrum. In spite of some drawbacks, the simulation gives a coherent dependence of the light emission on the drain voltage, which actually determines the electron concentration along the channel and, therefore, the electric field and the electron states in the channel.

Quantum Well Model

Emission Energies and Corresponding Wavelengths

As a result of the momentum conservation law, the indirect bandgap nature of silicon prevents radiative electron-hole recombination.30 The minimum conduction band energy level is near the Brillouin zone edge and, therefore, has high momentum, while the highest valence band energy level is at the Brillouin zone center. Since a photon having energy equal to the difference between these two energy levels has negligible momentum, radiative recombination due to interband charge transitions is forbidden under normal conditions. The main interest of the quantum well effect is to shift all the electron transitions to the minimum conduction band energy. The quantum effect induces energy subbands at the same Brillouin zone point; therefore, electron subband transitions are allowed, and photon emission is obtained.8 In the MOSQWELL transistor, the well is the n-doped Si channel between the buried oxide layer and the gate oxide layer [Fig. 1(b)], with respective thicknesses of 75 and 25 nm.
It should be emphasized that the same Si layer serves as the recessed channel and as the gates below the electrical contacts, where the layer thickness is different: 4 nm for the channel and 50 nm for the source (S) and drain (D) contacts. While quantum effects are expected in the channel, the silicon thickness in the drain and source regions is large enough that quantum effects are negligible. Let us consider that the silicon layer is confined in the z direction (the growth direction) between the SiO2 layers, and that the layer has a variable width in the x direction. In the y direction, the layer can be considered infinite. We can assume that the electron motion in the x direction is free, although there is a small potential step at the interfaces between the gate and the channel. Therefore, the electrons behave as a 2-D free electron gas in the xy plane. The quantum-well-induced energy levels E are then obtained by solving the one-dimensional (1-D) Schrödinger equation:

−(ħ²/2mz)·d²ψ(z)/dz² + V(z)·ψ(z) = E·ψ(z)   (1)

where V(z) is the potential energy in the confined (z) direction, mz is the effective mass of the electron, and ψ(z) is the electron wave function. The energy band diagram of the quantum channel is described in Fig. 3. The SiO2 bandgap is 8.9 eV,31 and the energy difference between the conduction band of the Si well and the SiO2 barrier (the barrier height) is 3.3 eV.31 We assume that the electron effective mass in the SiO2 barrier is the free electron mass m0.31,32 In the Si well, the effective electron mass near the conduction minimum is the transverse conduction band electron effective mass, 0.2 m0,33 since the Si is grown in the (100) direction.

Fig. 3 Schematics of the energy band diagram (conduction and valence bands) of the quantum well channel structure, for a given channel thickness L.

In our case, only the quantum sublevels in the conduction band are relevant since we consider an n-MOSFET device. The relative intensity of the light emission of a given transition between subband levels i and f, which is related to the transition probability per unit of time between confined levels, is calculated from the transition probability per unit of time Γif given by the Fermi golden rule

Γif = (2π/ħ)·|⟨ψf|H|ψi⟩|²·ρ   (2)

where H is the electric dipole transition operator q·r (the perturbing Hamiltonian that is responsible for the photon emission by electron recombination), ρ is the density of final states, and ψi and ψf are the initial and final electron state wave functions, respectively. Because of the odd parity of the transition operator, the main electron transitions can only occur between states of opposite symmetry. The 1-D Schrödinger Eq. (1) can be solved numerically to obtain the different energy levels in the well; a sketch of such a computation is given below. For the well thicknesses L that we are dealing with (2, 3, …, 9 nm), the energy levels are shown in Table 1 for the even and the odd solutions of the equation. The allowed transition emission energies and the corresponding wavelengths lying in the optical communication domain (typically 1 to 2 μm) and beyond the optical absorption threshold of silicon are also shown in the table.
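The following Python sketch (our own illustration, not the solver used for the original simulations) diagonalizes a finite-difference Hamiltonian with the barrier height and effective masses quoted above, using the BenDaniel–Duke form for the position-dependent mass; for L = 2 nm it yields confined levels of the order of the first row of Table 1:

```python
import numpy as np

# Parameters taken from the text
L = 20.0    # well width in Angstrom (2 nm)
W = 30.0    # SiO2 barrier thickness retained on each side, Angstrom
V0 = 3.3    # Si/SiO2 conduction band offset, eV
C = 3.81    # hbar^2/(2*m0) in eV*Angstrom^2
h = 0.05    # grid spacing, Angstrom

z = np.arange(-W, L + W, h)
well = (z >= 0) & (z <= L)
V = np.where(well, 0.0, V0)            # well bottom taken as the energy origin
a = np.where(well, C / 0.2, C / 1.0)   # hbar^2/(2*m(z)): m* = 0.2*m0 in Si, m0 in SiO2

# BenDaniel-Duke discretization of -(d/dz)(a(z) d/dz) + V(z), with psi = 0 at the ends
am = 0.5 * (a[:-1] + a[1:])            # a(z) at the grid midpoints
diag = np.concatenate(([am[0]], am[:-1] + am[1:], [am[-1]])) / h**2 + V
off = -am / h**2
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(E[E < V0][:3])   # confined levels (eV); compare with the 2-nm row of Table 1
```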
Table 1 Si well sublevel energies (eV) in the conduction band obtained for well thicknesses from 2 to 9 nm. Emission energies (eV) and corresponding wavelengths (μm) are also summarized.

Si well thickness L (nm) | Even solution energy levels (eV) | Odd solution energy levels (eV) | Emission energies (eV) | Emission wavelengths (μm)*
2 | 0.2, 2.3 | 0.95 | 0.75 | 1.65
3 | 0.11, 1.2 | 0.52, 2.2 | 0.68, 1 | 1.82, 1.24
4 | 0.076, 0.74, 2.2 | 0.32, 1.4, 3.2 | 0.66, 0.8, 1 | 1.88, 1.55, 1.24
5 | 0.045, 0.51, 1.45, 2.94 | 0.21, 0.91, 2.1 | 1.24, 0.86, 0.65, 0.84 | 1, 1.44, 1.9, 1.47
6 | 0.039, 0.37, 1.1, 2.1 | 0.16, 0.64, 1.5, 2.8 | 0.94, 1.13, 0.7 | 1.32, 1.09, 1.77
7 | 0.03, 0.28, 0.8, 1.6, 2.6 | 0.12, 0.49, 1.2, 2.1, 3.3 | 0.68, 1.11, 1.17, 0.92, 0.7 | 1.82, 1.11, 1.05, 1.34, 1.77
8 | 0.023, 0.22, 0.62, 1.24, 2.1 | 0.095, 0.39, 0.9, 1.6, 2.6 | 1.14, 0.85, 0.87, 0.68, 1.2, 0.98 | 1.08, 1.46, 1.42, 1.82, 1.03, 1.26
9 | 0.0186, 0.175, 0.495, 0.99, 1.665, 2.515 | 0.077, 0.315, 0.72, 1.305, 2.07, 3 | 0.913, 0.675, 0.94, 1.13, 0.81, 1.21 | 1.36, 1.83, 1.32, 1.09, 1.53, 1.02

*Calculations are limited to 2 μm for practical considerations, to match the optical communication spectra.

Simulated Spectrum Modeling

In addition to the single line calculation, the overall emission spectra are obtained by combining the different lines into a single spectrum using a Lorentzian broadening

I(λ) = Σi Ii·(αL/2)² / [(λ − λi)² + (αL/2)²], i = 1, …, Npeaks   (3)

where αL is the Lorentzian full width at half maximum and λi is the transition photon wavelength of the i'th peak, among the total number of peaks Npeaks, corresponding to the allowed transition between the j and k states of opposite symmetry. The broadening is due to many causes, such as nonuniformity, temperature, and interface roughness. We assumed a full width at half maximum αL of 0.05 eV, which roughly corresponds to the width of Si-based device electroluminescence at room temperature.15 (A short sketch of this broadening step is given further below.)

Simulation Results

Electrical Output Characteristics

Prior to running the optical simulations, we had to simulate the electrical characteristics, using the COMSOL Multiphysics package, for the device having a 4-nm-thick channel. The aim of those simulations was to ensure that the device structure behaves like a regular MOSFET transistor. Output characteristics, i.e., drain current versus drain voltage (Ids–Vds) with the gate voltage Vgs stepping from 0 to 4 V, were simulated (Fig. 4). As expected for a MOSFET transistor, the saturation regime is reached after the linear one for each Vgs voltage of the output curves. The avalanche breakdown zone was also empirically described by an exponential trend (as inserted in Fig. 4). We can extract the "activation" energy Ea of the avalanche process, defined by the exponential term e^(qVds/Ea), by taking the reciprocal value of the fitting slope; we get Ea=0.35 eV. Thus, this satisfying transistor behavior acts as a functionality check of the electrical simulation part.

Fig. 4 Simulated characteristics of a MOSQWELL device having a channel thickness of 4 nm in the breakdown regime.

Optical Emission Spectra

Emission wavelength as a function of the channel thickness

Once the device is defined in COMSOL and the allowed transitions have been calculated, we can simulate, for a given transition, the emission spectra for different channel thicknesses. The calculations were made for channel thicknesses of 2 to 9 nm to evaluate the relevant light-emitting transitions between the allowed intersubband energy levels, as depicted in Table 1. The simulated graph results (Figs. 5–8) present the intensity, in arbitrary units (a.u.), of the light emission as a function of the wavelength (emission spectrum) up to 2 μm. Note that emission wavelengths below 1.1 μm are not considered because of the silicon band-to-band optical absorption.
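Spectra of the kind presented in Figs. 5–8 can be assembled from the Table 1 lines with the broadening model above. A minimal Python sketch (our own, with made-up relative peak weights, since the true weights come from the COMSOL transition-rate calculation) for the 4-nm transitions:

```python
import numpy as np

lam = np.linspace(1.0, 2.0, 500)   # wavelength axis, micrometers
peaks = [1.88, 1.55, 1.24]         # 4-nm channel transitions from Table 1, micrometers
weights = [1.0, 0.6, 0.3]          # illustrative relative intensities (assumed)

spectrum = np.zeros_like(lam)
for lam_i, w in zip(peaks, weights):
    # FWHM of 0.05 eV converted to a wavelength width at this peak: d(lambda) = lambda^2/(hc) * dE
    fwhm = lam_i**2 / 1.24 * 0.05  # hc = 1.24 eV*um
    spectrum += w * (fwhm / 2)**2 / ((lam - lam_i)**2 + (fwhm / 2)**2)
```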
From Fig. 5, we can see that a thickness of 3 nm does not give better emission in the desired wavelength range, since most of its light emission is at a greater wavelength. This is because, for this larger thickness, the energy of the transition is lower than for the 2-nm thickness case. Despite the fact that this larger thickness allows an additional electron-confined energy level, this level is too high to allow a transition in the required wavelength domain. By further increasing the channel thickness to 4 nm and higher, the energy levels go even lower, which allows additional transitions in the required (near IR) domain, as can be seen in Figs. 5–8.

Fig. 5 Simulated light-emission spectra for devices with channel thicknesses of 2 and 3 nm.
Fig. 6 Simulated light-emission spectra for devices with channel thicknesses of 4 and 5 nm.
Fig. 7 Simulated light-emission spectra for devices with channel thicknesses of 6 and 7 nm.
Fig. 8 Simulated light-emission spectra for devices with channel thicknesses of 8 and 9 nm.

Control of the light intensity as a function of the drain voltage Vds

As a worst-case test, we selected the 4-nm-thick channel device because of its relatively small peak intensity. The emission spectrum was simulated for the first emission peak, located at 1.88 μm (0.66-eV energy transition), and for different electrical conditions (Vgs kept at 2 V and Vds varying from 2.34 to almost 3 V to get into the avalanche breakdown regime, as shown in Fig. 9). As expected, the intensity increases steeply with Vds, indicating that the more hot electrons are present in the channel, the stronger the light emission. More precisely, it appears that the light emission has a threshold electrical condition on Vds, which is connected to the breakdown regime.

Fig. 9 Intensity of the emission spectra for several Vds values, simulated for the intersubband gap of 0.66 eV (1.88-μm wavelength). The device is the 4-nm-thick channel device (Vgs=2 V).

The emitted light intensity is directly related to the emitted photon quantity. Since a photon is the result of one electron recombination, the more electrons there are in the channel, the more photons will be emitted and, therefore, the greater the light intensity. In the transistor channel, the electron quantity is exponentially related to the drain voltage, and so should be the light intensity. In Fig. 10, we can fit the intensity dependence of each spectral line on Vds by an exponential dependence of the form e^(qVds/Ea). The extraction of the activation energy Ea as the reciprocal of the slope gives 1/7.39 ≈ 0.135 eV (a short sketch of this fitting step is given at the end of this section). Note that it has the same order of magnitude as the one found for the avalanche breakdown (Fig. 4).

Fig. 10 Intensity dependence of the 1.88-μm emission peak for the 4-nm-thick channel device at Vgs=2 V.

Location of the light emission in the channel

As part of the analysis of the device behavior, and following the previous setup conditions, it was important to simulate and understand not only the intensity of the spontaneous light emission (1.88 μm) from the silicon channel (4 nm) but also its location and spatial distribution. Assuming that the intensity of the light emission due to the transitions between the quantum levels is proportional to the spontaneous recombination rate, we simulated the distribution of this rate in the channel, as shown in Fig. 11. The maximum value is located close to the channel edge and decays rapidly along the channel.

Fig. 11 Distribution of the radiative recombination rate along the silicon channel for Vds=3.2 V. The horizontal scale is in nm.
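The activation-energy extraction of Fig. 10 amounts to a linear fit of the logarithm of the intensity against Vds. A short Python sketch with synthetic data (the slope of 7.39 V⁻¹ quoted above is used only to generate the fake points):

```python
import numpy as np

rng = np.random.default_rng(0)
vds = np.linspace(2.34, 3.0, 8)
intensity = 1e-3 * np.exp(7.39 * vds) * (1 + 0.02 * rng.standard_normal(vds.size))  # synthetic

slope, _ = np.polyfit(vds, np.log(intensity), 1)
print(f"Ea = 1/slope = {1.0 / slope:.3f} eV")   # ~0.135 eV, the reciprocal-slope extraction
```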
Analysis and Discussion

As mentioned above, the aim of this analysis is to demonstrate the coupling of the optical properties to the device quantum structure and electrical parameters: respectively, the wavelength of the emission is a function of the channel thickness, and the intensity of the peak is controllable by the applied drain voltage.

Electrical Study

Regarding the electrical characteristics shown in the avalanche breakdown regime, the electrons are "pumped" to higher energy sublevels of the conduction band. From this high-energy state, the electrons can relax to lower sublevels and can emit a photon. Since one photon is emitted per electron recombination, the emitted light intensity, which is proportional to the number of emitted photons (for a given wavelength), is proportional to the current passing through the channel. As seen in Figs. 9 and 10, the light intensity is indeed increasing in the same kind of exponential trend with Vds as in the avalanche breakdown regime. In spite of the fact that we work in the avalanche breakdown regime, we remain in a relatively low Vds range (up to 2.6 V), so we can assume that there will not be a strong degradation of the device functionality.

Adjustment of the Wavelength as a Function of the Channel Thickness

As shown above, and as expected from the number of transitions, the thicker the channel is, the higher the number of peaks and the higher the intensity (Figs. 5–8). This allows us to choose the best thickness to obtain the highest throughput at the required wavelength and the required bandwidth. For example, if we are interested in monochromatic emission, we may choose thin layers (2 or 3 nm). We can therefore introduce a new transistor descriptive parameter that sets the emitted wavelength.

Control of the Light Intensity as a Function of the Drain Voltage Vds

As for the light-intensity control, the results may enable a preliminary forecast of the expected highest intensity as a function of the thickness for a defined window, in our case the 1 to 2 μm range. Moreover, as the drain voltage controls the light intensity, we can reasonably assume that an electrical signal applied at the gate can also modulate the light emission. From these preliminary simulated results, we can expect that, by coupling Si waveguides to the channel, the optical signal can be transmitted to a detector coupled to another transistor. This is actually the elementary communication operation between two elementary devices, where this optical communication is equivalent to the electron communication between devices or blocks. The main difference here is that there is no more RC delay limiting the electron motion and no cross talk between the devices. Since there is no cross talk, the devices can be closer and the modulation speed can get higher without tampering with the processor work.

Usage and Possible Applications

The need for future generations of very large-scale integrated circuits working at frequencies of 10 GHz and above is a challenge that CMOS technology has been trying to meet for several decades. At such frequencies, the signal propagation delay on a chip and a circuit board, as well as signal cross talk, impose severe limitations on system design and performance. A proposed solution to this problem is to move to optical signal transmission in the critical paths.
The merging of microelectronics with communication in general, and with optical communication in particular, vigorously pushes the efforts to realize both electronic and electro-optic functions on the same silicon chips. The devices presented in this study allow this kind of communication. This is further motivated by the limitations of the metal wiring on a chip and a PC board in transferring electronic signals in the 10-GHz range and above. Moreover, failure mechanisms in metal interconnects, such as self-heating and electromigration, can be prevented in the case of optical communication between blocks when using optical emission through silicon waveguides. In our preliminary study, we have emphasized that QW MOSFET processing is compatible with standard CMOS technology. Moreover, CMOS on SOI wafers is already a commercial process, and the expected light emission at selected wavelengths, in correlation with the silicon thickness in the transistor, may lead to the development of a modified SOI CMOS technology that will include both standard CMOS transistors for the performance of the electronic functions and quantum well transistor devices for optical communication. We are aware of the very low emission intensity that may be emitted by the device; however, this concern should not be detrimental since the device is not intended to serve as a laser emitter but rather as an electro-optic modulator. In other words, the aim of this device is to act as a local emitter of IR radiation, which may be electrically modulated and eventually detected and amplified by other detectors in its close neighborhood, using a built-in waveguide. Assuming that we can build such a light-emitting transistor, we can obtain a device that can convert an ultrahigh-frequency electric signal to an optical signal, which in turn can propagate without any coupling or radiative effects. This may dramatically increase the processor work frequency.

In this article, we presented a new nanoscale MOSFET Quantum Well (MOSQWELL) transistor based on a GRC quantum-channel structure. The model of the device is described, and simulation results are presented. Optical simulations show promising light-emission features (both in intensity and in wavelength) adapted to optical communication. On the other hand, electrical simulation results were found promising toward future measurements on devices. To get a series of measurements, several thicknesses would be processed and characterized. The model presents several expected values of energy levels, which can enable the light-emitting mechanism, and shows the control of the drain voltage on the emission intensity in the avalanche breakdown regime.

1. N. Savage, "Linking chips with light," IEEE Spectrum, 2015, http://spectrum.ieee.org/semiconductors/optoelectronics/linking-chips-with-light
2. J. Mason, "Development of on-chip optical interconnects for future multi-core processors," 2008, http://www.smartdatacollective.com/jackmason/22794/development-chip-optical-interconnects-future
3. A. W. Bogalecki and M. du Plessis, "Design and manufacture of quantum-confined Si light sources," S. Afr. Inst. Electr. Eng. 101(1), 11–16 (2010).
4. L. Pavesi, "Will silicon be the photonic material of the third millennium?" J. Phys.: Condens. Matter 15, R1169 (2003). http://dx.doi.org/10.1088/0953-8984/15/26/201
5. D. Thomson et al., "Roadmap on silicon photonics," J. Opt. 18, 073003 (2016).
Michael Bendayan received his PhD from the Technion. He is part of the research team under the supervision of Dr. Avi Karsenty. He is now a professional fellow at Rafael Advanced Defense Systems Ltd.

Roi Sabo received his BSc degree from the Department of Physics/Electro-Optics Engineering at the Lev Academic Center. He is part of the research team under the supervision of Dr. Avi Karsenty. He now works in the high-tech industry as an electro-optical engineer.

Yaakov Mandelbaum received his BA and MA degrees in mathematics from the University of Pennsylvania, in 1993 and 1995, and was part of the Applied Mathematics Program at the Massachusetts Institute of Technology (1993–1995). He completed his MSc degree at the Racah Institute of Physics (1998–2001). After working at QLight Nanotech (2011–2013) and QuantUp (2014–2015), he joined the Physics/Electro-Optics Department at the Lev Academic Center (2013). He has begun his PhD studies under the supervision of Dr. Karsenty and Prof. Zalevsky.

Avraham Chelly received his MSc degree in material science from the Université Louis Pasteur, France, in 1992 and his PhD in solid state physics from the Université de Haute Alsace, France, in 1997. He held a postdoctoral position at the Hebrew University of Jerusalem, Israel. He worked as a research engineer at the Microelectronics Laboratory in 2004, and then moved to Bar-Ilan University, where he established the Advanced Semiconductor Devices Laboratory.
He is involved in nanoscale electro-optic device research and lectures.

Avi Karsenty received his MSc degree and PhD in applied physics and material science (microelectronics and electro-optics) from the Hebrew University of Jerusalem in 1996 and 2003, respectively. After 22 years in the high-tech industry, part of it with Intel Electronics Corporation (1995–2011), he is today the head of the Physics/Electro-Optics Engineering Department and of the Excellence Empowerment Program at the Lev Academic Center. He has received 38 awards and is a senior member of IEEE and OSA. His major research field is quantum coupled devices.

Michael Bendayan, Roi Sabo, Roei Zolberg, Ya'akov M. Mandelbaum, Avraham Regis Chelly, and Avi Karsenty, "Electrical control simulation of near infrared emission in SOI-MOSFET quantum well devices," Journal of Nanophotonics 11(3), 036016 (23 August 2017). http://dx.doi.org/10.1117/1.JNP.11.036016 Received 1 March 2017; accepted 1 August 2017.
Relativity: Relatively uninteresting for chemists?

The periodic table of elements is among the cornerstones of chemistry. Across both the periods (rows) and the groups (columns), elements show systematic trends in their properties. However, towards the bottom of the periodic table, some elements (among the transition metals) show unusual properties due to relativistic effects. For instance, gold is not silvery like most of the other metals in its group. Mercury is a liquid, unlike its immediate neighbours, cadmium and gold. These differences can be explained by the concepts of quantum chemistry. Initially, the concepts of quantum chemistry were developed without considering the theory of relativity (the Schrödinger equation, 1925). Later, Paul Dirac (1928) formulated a relativistic wave equation to account for relativistic effects. These effects depend on the electron speed relative to the speed of light; hence, they are more pronounced for heavy elements, whose electrons attain relativistic speeds. As such, if we calculate the properties of gold without considering relativistic effects, we predict a silvery colour, which is not in line with reality. However, if relativistic corrections are included, the calculated values come much closer to reality. This is mainly due to the contraction of the 6s orbital and the expansion of the 5d orbital of gold, which make it absorb in a different region of the electromagnetic spectrum than silver.

Nuclear magnetic resonance (NMR) properties are among the molecular properties affected by relativistic effects. NMR spectroscopy is a powerful technique used to obtain detailed information about the chemical environment, dynamics and structure of molecules. Chemists use it to determine molecular structures by analysing, for example, the chemical shielding (σK, the difference between the applied external magnetic field and the field at the nucleus caused by the surrounding shells of electrons) in parts per million (ppm), the chemical shift (δK, the difference between the chemical shielding of a certain nucleus K and that of the same nucleus in a reference molecule) in ppm, and the spin-spin coupling constant (J, a measure of the interaction between neighbouring nuclei of a molecule) in Hertz. Medical practitioners use the multidimensional NMR imaging technique, magnetic resonance imaging (MRI), for diagnostic purposes.

In some cases, the determination of NMR properties from experiments may need additional input from quantum chemistry. For instance, until recently, the chemical shielding of a certain nucleus was estimated by taking half of the information (the paramagnetic contribution to the chemical shielding, σpara) from the electronic contribution to the experimental nuclear spin-rotation (NSR) constant, and the other half (the diamagnetic contribution to the chemical shielding, σdia) from quantum-chemical calculations (see the following equation). This approach may give reasonable results for very light atoms. However, it fails for heavy atoms if relativistic corrections are not taken into account. [Equation 1: the original image is not reproduced; a schematic reconstruction is given below.] In this study, the diatomic molecules silicon selenide (SiSe), silicon telluride (SiTe), germanium selenide (GeSe), germanium telluride (GeTe), tin selenide (SnSe), tin telluride (SnTe), lead selenide (PbSe) and lead telluride (PbTe) were considered to show the importance of relativistic corrections to the nuclear absolute shielding constants.
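Schematically, the relation referred to above can be written as follows; this is a reconstruction of the standard nonrelativistic (Flygare-type) relation with the exact numerical prefactor omitted, not a copy of the equation image in the original post:

$\sigma_K \;=\; \sigma_K^{\text{dia}} \;+\; \sigma_K^{\text{para}}, \qquad \sigma_K^{\text{para}} \;\propto\; \frac{m_p}{m_e}\,\frac{C_K^{\text{el}}}{g_K\,B}$

Here $C_K^{\text{el}}$ is the electronic contribution to the NSR constant, $B$ is the rotational constant of the molecule, $g_K$ is the nuclear g-factor, and $m_p/m_e$ is the proton-to-electron mass ratio.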
Details of the contributions of the NSR constants and NMR shielding constants are presented in the paper. The σpara contributions obtained from direct shielding calculations are compared with the electronic contributions to the NSR constants (Cel in the equation above). I present the sources of the errors in the previously reported values for all the nuclei in the above molecules. For instance, the difference between Cel and σpara for Si in SiSe is 16 ppm; however, it increases to 6321 ppm for Pb in PbSe (see the figure below). Note that lead is very heavy compared to silicon. The error for the selenium nucleus in most of the molecules is ≈270 ppm, whereas it is ≈1125 ppm for the tellurium nucleus. The difference increases as the atoms become heavier, showing the importance of relativistic corrections.

Fig. 1. [Figure not reproduced: the difference between Cel and σpara for each nucleus in the studied molecules.]

To sum up, relativistic corrections to σpara of the absolute shielding constants are very significant compared to those to σdia. This shows that determining σpara from Cel introduces an unrecoverable error in the total absolute shielding constant. The results also expose the shortcomings of the old assumption that absolute shielding constants can be obtained indirectly from Cel using the above equation. By taking care of the relativistic corrections, I present new, accurate absolute shielding scales for all of these nuclei, with the aim that the results will be used for future benchmarking of similar theoretical as well as related experimental studies. The approach followed in this study can serve as an immediate remedy for this kind of magnetic-property determination.

Taye B. Demissie (PhD)
Centre for Theoretical and Computational Chemistry, Department of Chemistry, UiT The Arctic University of Norway, N-9037 Tromsø, Norway

T. B. Demissie, "Theoretical analysis of NMR shieldings in XSe and XTe (X = Si, Ge, Sn and Pb): the spin-rotation constant saga," Phys. Chem. Chem. Phys. (2016).
Quantum Mechanics and Decision Theory

By Sean Carroll | April 16, 2012 8:20 am

Several different things (all pleasant and work-related, no disasters) have been keeping me from being a good blogger as of late. Last week, for example, we hosted a visit by Andy Albrecht from UC Davis. Andy is one of the pioneers of inflation, and these days has been thinking about the foundations of cosmology, which brings you smack up against other foundational issues in fields like statistical mechanics and quantum mechanics. We spent a lot of time talking about the nature of probability in QM, sparked in part by a somewhat-recent paper by our erstwhile guest blogger Don Page. But that's not what I want to talk about right now. Rather, our conversations nudged me into investigating some work that I have long known about but never really looked into: David Deutsch's argument that probability in quantum mechanics doesn't arise as part of a separate ad hoc assumption, but can be justified using decision theory. (Which led me to this weekend's provocative quote.) Deutsch's work (and subsequent refinements by another former guest blogger, David Wallace) is known to everyone who thinks about the foundations of quantum mechanics, but for some reason I had never sat down and read his paper. Now I have, and I think the basic idea is simple enough to put in a blog post — at least, a blog post aimed at people who are already familiar with the basics of quantum mechanics. (I don't have the energy in me for a true popularization at the moment.) I'm going to try to get to the essence of the argument rather than being completely careful, so please see the original paper for the details.

The origin of probability in QM is obviously a crucial issue, but becomes even more pressing for those of us who are swayed by the Everett or Many-Worlds Interpretation. The MWI holds that we have a Hilbert space, and a wave function, and a rule (Schrödinger's equation) for how the wave function evolves with time, and that's it. No extra assumptions about "measurements" are allowed. Your measuring device is a quantum object that is described by the wave function, as are you, and all you ever do is obey the Schrödinger equation. If MWI is to have some chance of being right, we must be able to derive the Born Rule — the statement that the probability of obtaining a certain result from a quantum measurement is the square of the amplitude — from the underlying dynamics, not just postulate it.

Deutsch doesn't actually spend time talking about decoherence or specific interpretations of QM. He takes for granted that when we have some observable X with some eigenstates |x_i>, and we have a system described by a state $|\psi\rangle = a|x_1\rangle + b|x_2\rangle$, then a measurement of X is going to return either x_1 or x_2. But we don't know which, and at this stage of the game we certainly don't know that the probability of x_1 is |a|^2 or the probability of x_2 is |b|^2; that's what we'd like to prove. In fact let's just focus on a simple special case, where $a = b = \frac{1}{\sqrt{2}}$. If we can prove that in this case, the probability of either outcome is 50%, we've done the hard part of the work — showing how probabilistic conclusions can arise at all from non-probabilistic assumptions. Then there's a bit of mathematical lifting one must do to generalize to other possible amplitudes, but that part is conceptually straightforward.
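To be concrete about the target, here is the Born Rule stated operationally, as a sampling prescription, in a minimal Python sketch; this is just a toy statement of the rule we would like to derive, not part of Deutsch's argument.

import numpy as np

rng = np.random.default_rng(0)

a = b = 1 / np.sqrt(2)   # the equal-amplitude special case
p1 = abs(a)**2           # Born Rule: P(x_1) = |a|^2

# If the rule holds, frequencies over many measurements approach p1:
N = 100_000
print((rng.random(N) < p1).mean())   # close to 0.5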
Deutsch refers to this crucial step as deriving "tends to from does," in a mischievous parallel with attempts to derive ought from is. (Except I think in this case one has a chance of succeeding.) The technique used will be decision theory, which is a way of formalizing how we make rational choices. In decision theory we think of everything we do as a "game," and playing a game results in a "value" or "payoff" or "utility" — what we expect to gain by playing the game. If we have the choice between two different (mutually exclusive) actions, we always choose the one with higher value; if the values are equal, we are indifferent. We are also indifferent if we are given the choice between playing two games with values V_1 and V_2 or a single game with value V_3 = V_1 + V_2; that is, games can be broken into sub-games, and the values just add. Note that these properties make "value" something more subtle than "money." To a non-wealthy person, the value of two million dollars is not equal to twice the value of one million dollars. The first million is more valuable, because the second million has a smaller marginal value than the first — the lifestyle change that it brings about is much less. But in the world of abstract "value points" this is taken into consideration, and our value is strictly linear; the value of an individual dollar will therefore depend on how many dollars we already have.

There are various axioms assumed by decision theory, but for the purposes of this blog post I'll treat them as largely intuitive. Let's imagine that the game we're playing takes the form of a quantum measurement, and we have a quantum operator X whose eigenvalues are equal to the value we obtain by measuring them. That is, the value of an eigenstate |x> of X is given by $V[|x\rangle] = x$. The tricky thing we would like to prove amounts to the statement that the value of a superposition is given by the Born Rule probabilities. That is, for our one simple case of interest, we want to show that

$V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = \frac{1}{2}(x_1 + x_2) . \qquad (1)$

After that it would just be a matter of grinding. If we can prove this result, maximizing our value in the game of quantum mechanics is precisely the same as maximizing our expected value in a probabilistic world governed by the Born Rule. To get there we need two simple propositions that can be justified within the framework of decision theory. The first is: Given a game with a certain set of possible payoffs, the value of playing a game with precisely minus that set of payoffs is minus the value of the original game. Note that payoffs need not be positive! This principle explains what it's like to play a two-person zero-sum game. Whatever one person wins, the other loses. In that case, the values of the game to the two participants are equal in magnitude and opposite in sign. In our quantum-mechanics language, we have:

$V\left[\frac{1}{\sqrt{2}}(|-x_1\rangle + |-x_2\rangle)\right] = -V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] . \qquad (2)$

Keep that in mind. Here's the other principle we need: If we take a game and increase every possible payoff by a fixed amount k, the value is equivalent to playing the original game, then receiving value k. If I want to change the value of playing a game by k, it doesn't matter whether I simply add k to each possible outcome, or just let you play the game and then give you k. I don't think we can argue with that.
In our quantum notation we would have

$V\left[\frac{1}{\sqrt{2}}(|x_1+k\rangle + |x_2+k\rangle)\right] = V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] + k . \qquad (3)$

Okay, if we buy that, from now on it's simple algebra. Let's consider the specific choice $k = -x_1 - x_2$ and plug this into (3). We get

$V\left[\frac{1}{\sqrt{2}}(|-x_2\rangle + |-x_1\rangle)\right] = V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] - x_1 - x_2 . $

You can probably see where this is going (if you've managed to make it this far). Use our other rule (2) to make this

$-2 V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = -x_1 - x_2 , $

which simplifies straightaway to

$V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = \frac{1}{2}(x_1 + x_2) , $

which is our sought-after result (1). Now, notice this result by itself doesn't contain the word "probability." It's simply a fairly formal manipulation, taking advantage of the additivity of values in decision theory and the linearity of quantum mechanics. But Deutsch argues — and on this I think he's correct — that this result implies we should act as if the Born Rule is true if we are rational decision-makers. We've shown that the value of a game described by an equal quantum superposition of states |x_1> and |x_2> is equal to the value of a game where we have a 50% chance of gaining value x_1 and a 50% chance of gaining x_2. (In other words, if we acted as if the Born Rule were not true, someone else could make money off us by challenging us to such games, and that would be bad.) As someone who is sympathetic to pragmatism, I think that "we should always act as if A is true" is the same as "A is true." So the Born Rule emerges from the MWI plus some seemingly-innocent axioms of decision theory.

While I certainly haven't followed the considerable literature that has grown up around this proposal over the years, I'll confess that it smells basically right to me. If anyone knows of any strong objections to the idea, I'd love to hear them. But reading about it has added a teensy bit to my confidence that the MWI is on the right track.

• anon. This is cute, but one aspect of it is bothering me. Believing in QM and understanding decoherence gets you to the point that Hamiltonian evolution in the presence of an environment gives you states that have some "weight," measured by the Hilbert space measure, clustered around apparent classical outcomes. The inner product, which measures this "weight," is an intrinsic part of QM, I think. I see the problem of deriving the Born Rule as being the problem of showing that if you repeat an experiment a number of times, the frequencies approach those corresponding to counting these states by the Hilbert space weight. In other words, the inner product isn't just a mathematical device that hangs around, it plays a key role in determining observable outcomes. So: where's the inner product on Hilbert space hiding in the argument you outlined above? It might be hiding in some assumption about how the x states are normalized, but can it be made explicit in a way that shows that this is really addressing the right question?

The step from the equation just before "You can probably see where this is going" to the equation just after makes implicit use of the inner product. (Update: oops, not true, see #6 and #7 below.) Note that we switched the order of |x_1> and |x_2> in the sum, which wouldn't have been possible if they didn't have equal amplitudes.
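The little derivation above can also be checked mechanically. A minimal symbolic sketch, assuming Python with sympy (the encoding of the two axioms is mine, not Deutsch's):

from sympy import Eq, solve, symbols

x1, x2, V = symbols('x1 x2 V', real=True)

# Axiom (3) with k = -(x1 + x2): shifting every payoff by k shifts the
# game's value by k, so the shifted game is worth V - (x1 + x2).
# Axiom (2): the shifted payoffs are (-x2, -x1), minus the original set,
# so the same shifted game is also worth -V.
print(solve(Eq(V - (x1 + x2), -V), V))   # [x1/2 + x2/2], i.e. result (1)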
Although I haven't read the paper, I heard the talk, and his ideas seemed more in line with the ways we physicists like to approach problems. UPDATE: I think this links to the original literature.

I haven't thought carefully about this so please excuse if this discusses a differently nuanced issue: I think it is the same kind of issue, and Zurek's papers are extremely interesting. Instead of talking about decision theory, he talks about symmetries. He claims that, once we allow for the existence of an environment, there is a new symmetry ("envariance") that applies to states like (1), so that the probabilities of getting x_1 and x_2 must be equal. From there the same reasoning applies. There is some critique along the lines of "Zurek shows that if it's appropriate to think of quantum mechanics in terms of probabilities at all, then those probabilities should obey the Born Rule, but he doesn't actually demonstrate the need for probabilities." It's not clear to me that this couldn't also be applied to Deutsch's argument. But this is philosophical terrain, and I think the underlying thrusts of Deutsch and Zurek are actually quite similar, although using quite different vocabularies.

• http://www.dudziak.com will The 1/sqrt(2) does not seem justified, and as that is the crux of the discussion, this argument does not convince me. You might as well replace 1/sqrt(2) with a variable 'm' for example throughout all the equations, and your final conclusion would be just as "correct". With 1/sqrt(2) removed, the whole argument becomes a tautology… interesting no doubt, but proving nothing except that the author is well versed in basic algebra.

• http://mattleifer.info Matt Leifer Sean, that is not using the inner product. It is simply using the vector space structure. You can't assume that the inner product has any a priori relevance within this approach because that is what you are trying to derive, i.e. the only reason you pay attention to things like inner products and unitarity within conventional quantum mechanics is because you are trying to avoid negative probabilities, but you have no reason for connecting those two things until you have first derived the Born rule. I too like this argument, although I have my own version of it that makes use of Gleason's theorem which I prefer, since it tells you that you should structure your probability assignments according to traces of operators against some density operator, even if you don't know what the "wavefunction of the universe" is. There are legitimate issues surrounding the interpretation of probability in this approach, i.e. should one also be trying to derive a limiting frequency. Many of these issues are not specific to QM, since people differ on whether this is required even in the classical case. However, whether or not you think frequencies are required, it must be admitted that getting the decision theoretic interpretation right is even more important. After all, if I could derive a relative frequency, but was not able to derive the fact that I should use probabilities to inform my decisions, then that would be a complete disaster. What use is it if I can derive that a fair quantum coin should have limiting 50/50 relative frequencies, but not that I should consider a bet on heads at stake $1 that pays $2 to be fair? There are also issues surrounding the very meaning of terms like "probability" and "utility" in this approach, since we are assuming that all outcomes actually occur.
The two concepts get mushed together into something like a "caring weight" which measures how much we should care about each of our successors at the end of a quantum experiment. If you think about that for a minute it leads to moral issues, e.g. why should I care less about a successor who lives in a branch that happens to have a small amplitude? In the analogous classical case we can say it is because there is a very small chance that such a successor will exist, but quantum mechanically they definitely will exist. Thus, one can question whether it is moral to accept a scenario in which you get a large sum of money on a large amplitude branch, but die a horrible painful death in another branch, even with an amplitude that is epsilon above zero. In light of the Deutsch-Wallace argument, this indicates one of two things; either:
– The usual intuitions about decision theory break down in a many-worlds scenario, or
– They do not break down, but we would always use extremal utilities, which makes it vacuous.
By an extremal utility, I mean one that is infinity or -infinity on some outcomes, e.g. dying a painful death. The principle of maximum expected utility is useless in such cases. I have a lot more to say on this subject, but not the energy to go into it right now. I do have a paper on the backburner at the moment that deals with these issues.

Matt– You're right, I was being very sloppy. That's just the vector-space structure. The role of the inner product is essentially what you're trying to derive, as you say. Thanks for the other comments. As you say, most of the additional issues refer to the nature of probability (or the definition of "value"), not really specifically to quantum mechanics.

will– The argument certainly isn't a tautology. Of course you could replace the 1/sqrt{2} by any number, as long as the coefficient of both terms is the same (that's what was used in the argument just referenced). But that's what you want! If that number were something else, you would have a non-normalized wave function. But you would still want to have equal probabilities for two branches with equal weights.

• Peli Grietzer This fantastic paper by Adrian Kent has some great arguments about why the 'but what does speaking about probabilities even mean' issue for MW is sharply unlike any similar issues that arise for one-world theories: http://arxiv.org/abs/0905.0624

• CU Phil There is quite a bit of criticism of the decision-theoretic proposal (most vociferously from David Albert and Adrian Kent), as well as several papers advocating the approach, in this volume; the review gives a nice summary of the debate. Bob Wald also reviewed the above volume in Classical and Quantum Gravity and gives an insightful review.

• Michael Bacon I don't think that Kent's argument succeeds in proving the failure of the Everett program. However, assuming that his argument does succeed, Kent goes on to say that such Everettian failure "adds to the likelihood that the fundamental problem is not our inability to interpret quantum theory correctly but rather a limitation of quantum theory itself." Perhaps, but at least for now, my money remains on quantum theory.

• http://mmcirvin.livejournal.com/ Matt McIrvin @will: The requirement that state vectors have norm 1 is already a requirement of quantum mechanics separate from any interpretation of amplitudes as probabilities. Given that, the factor of 1/sqrt(2) (up to some arbitrary complex phase) is necessary if the two terms have equal coefficients.
Once you make any move in the direction of a probabilistic interpretation, the Born rule falls out as the only one that makes mathematical sense; there are many ways of demonstrating this. But that first step is a doozy, and I always have the sneaking suspicion that arguments like this one have somehow smuggled their conclusion in as part of an assumption that only seems less controversial. • http://mmcirvin.livejournal.com/ Matt McIrvin …my own favorite handwaving quasi-derivation of the Born rule was a probably-not-original stochastic argument that I thought up on a long walk along the Charles River many years ago. Consider the Feynman path integral for a particle that travels from point A to point B. Now suppose that you put a screen between point A and point B that randomly tweaks the particle’s wavefunction phase to a different value at each point (maybe coarse-grain it a little to make the math tractable: divide it into tiny “pixels” that each have a different random phase factor). Now consider the amplitude that the particle goes from point A to point B traveling through some coarser-grained but still small bundle of pixels. The amplitudes for each pixel will add like a random walk, yielding an overall amplitude that increases as the square root of the number of pixels. Which is exactly what you’d get by interpreting the square of the amplitude as a probability. • Moshe I’m puzzled about something really basic: you are trying to argue for an expression that is quadratic in the coefficients a,b of your wavefunction (something that encodes in it interference, the essential mystery of QM). Instead you are deriving an expression which is linear in these coefficients (as pointed out, you have only used the linear structure of the Hilbert space, not the inner product). The derivation seems to use in an essential way the equality of both coefficients a=b, and of course that is precisely the only case where quadratic and linear expressions have the same consequences. But, what happens in the generic case? For example, what happens if a,b only differ by a phase? that should still lead to the same final expression. It seems to me that if you put a=-b and repeat your derivation, you’d find the same minus sign in the RHS of (1), instead of the result predicted by the Born rule. Moshe– I encourage you to put a minus sign in front of the x_2 term and go through the math. :) Obviously there is work to be done generalizing to other amplitudes, but that’s done in the paper; I don’t think there’s much controversy about that part. • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel Sean: Like Peli Grietzer, I highly recommend Kent’s criticism of the decision-theory approach. To add to what Peli said, I think Kent conclusively shows that the axioms of decision theory in the many-worlds context are not nearly as obvious as they first appear, to the point that they become much less attractive than approaches which rest on Gleason’s theorem like Matt Leifer suggests. Of course, this is all truly philosophy; the game here is to try to reduce the axioms of quantum mechanics to their most beautiful (and, usually, simple) form. Sometimes, this improvement is so dramatic that I think everyone should agree that the new axioms are superior [such as my advisor Zurek’s work–which I am constantly advertising–showing that the mysterious declaration that observables be Hermitian operators can be traced back to the linearity of evolution and the need for amplification (http://arxiv.org/abs/quant-ph/0703160)]. 
But sometimes, it's just a matter of taste. Also, I'd like to clarify Michael Bacon's comment. Kent's paper strongly concentrates on attacking the decision-theoretic basis of Born's rule, and only addresses the attractiveness of quantum theory in general as an aside. In particular, by the "Everett program", Kent means the claim that quantum theory need not be supplemented by an ad hoc assumption for extracting probabilities. I believe Kent is open to the idea that quantum theory need not be modified *if* a sufficiently attractive assumption can be found which allows the extraction of unambiguous probabilities (e.g. if the "set-selection problem" in the consistent histories framework could be solved, which he has written about). But yes, Kent does take the extreme difficulty of finding a non-ad-hoc assumption as weak evidence that quantum theory is fundamentally wrong.

• Michael Bacon You obviously are closer to this than I am, and you may well be right that all Kent really thinks is that the extreme difficulty of finding non-ad-hoc assumptions is "weak" evidence that quantum theory is fundamentally wrong. However, that's not what the language I quoted says. At least here, he's clearly saying that there is a "likelihood" that quantum theory is wrong — i.e., more likely than not. And, that his work merely adds to that "likelihood". Nevertheless, perhaps I'm making too much of the particular words he chose to describe his view. By the way, I love the picture of you in your natural environment on your web page. 😉

• Anonymous Coward I'd be interested in how you view the relation to classical thermodynamics. There, likewise, a probability distribution "falls out of the sky". There is some justification in things like the Sinai-Boltzmann Conjecture, stating that the standard (Liouville-phase-space) measure is the only sensible one (uniquely ergodic, for the toy problem of hard-ball billiards)… IF you assume that the god who has chosen the initial conditions of the world has done so with an absolutely continuous probability distribution (SRB measure). If you admit "pathological" probability measures, the entire argument collapses unto itself. I always viewed, maybe naively, the Born rule as a similar thing. People conjecture, and hope to prove at some point, that the Born rule follows if we make the pretty basic (and mind-bogglingly subtle!) assumption that the initial conditions of our universe have been picked compatibly with some infinite-dimensional generalization of Lebesgue measure. [sorry for the theistic metaphor… personifying some aspects of nature helps me think more clearly]

I think it's certainly a good question. People like Albrecht and Deutsch believe that the only way to justify any classical probability distribution is ultimately in terms of the Born Rule. I wouldn't necessarily think it's a failure if the answer is "that's the most natural measure there is," but I'm hopeful that some better picture of the connection between QM and classical stat mech (plus perhaps some initial-conditions input from cosmology) will explain why the Liouville measure is the "right" one.

• Moshe I see where I was confused: you are using a linear structure in the space of eigenvalues, not for the coefficients, so the value for a=-b is not determined by the above considerations. I should probably take a look at the paper sometime; it sounds mysterious how one can get anything quadratic from what you wrote so far.

• Ben Hi Sean, I remember a great lecture by Nima Arkani-Hamed at T.A.S.I.
2007, http://physicslearning2.colorado.edu/tasi/hamed_02/SupportingFiles/video/video.wmv , where he points out that the Born Rule can be derived from the operator postulate, i.e. that physical measurement outcomes can be identified with the eigenvalues of a corresponding Hermitian operator. The argument is as follows: Construct the tensor-product state of N identically prepared copies of a|x1> + b|x2>. This could be expanded out using binomial coefficients. There is a Hermitian operator N1 which counts how many copies are in the state |x1>. Then if we take N1/N in the limit N to infinity, we obtain a Hermitian operator whose eigenvalue is |a|^2, i.e. it is the probability operator. So we get the Born Rule for free!

• http://jbg.f2s.com/quantum2.txt James Gallagher You can't get fundamental probability out without putting fundamental probability in; the Everett approach is just untenable, and even quite ridiculous imho, compared to just accepting that fundamental randomness exists — then the Born rule emerges as a kind of thermodynamic property of the Schrödinger evolution. The Bohmian guys have even demonstrated this (based on their wrong ontological model). Also, as I keep trying to tell everyone, the past universe does not exist; you have to look at the (discrete) flow of the Schrödinger evolution exp(hL).U(t) - U(t) to describe what we observe, and in this case we get 3D space as period-3 points in the Hilbert Space.

• Colin The math is incorrect in your equation 3 (and also in Deutsch's original paper). You only add 1/rt2 K to each outcome of the game on the left side of the equation, whereas you add an entire K to the right side of the equation. In reality, where you have 1/rt2(x1+x2) standing in place for the entire system Psi, you can do one of two things to manipulate the equation: value psi as a game with only one outcome, and add a single k to each side (trivial)… or you can keep 1/rt2(x1) and 1/rt2(x2) separate, and add an entire K to each… but still only 1 k on the right. EDIT… I noticed that this argument is a little skewed, as you are adding K to each eigenstate… so it's not the simple math; but the premise is still correct: what has been added to each outcome on the left is not what has been added to the entire game on the right. If I started with V(Psi>) instead of V(1/rt2 x1> + 1/rt2 x2>) (which are identical by assumption), I would add K to get V(psi> + k).

• http://qpr.ca/blog/ Alan Cooper The references to state vectors of the form |x+k> seem to be to eigenvectors of the operator X+k rather than of X, so I am not clear that it makes sense to say |x1-(x1+x2)> = |-x2> (In fact, making the operator explicit, we would seem to have |x1-(x1+x2) for X-(x1+x2)> = |x1 for X>, not |-x2 for X>.) And in any case, the argument seems to be showing that if there were an expectation function with the expected properties then it would have to satisfy the Born rule. But that is not the same as saying that such a function should actually have a probabilistic interpretation. (Actually I guess this is the same complaint as what you alluded to in the second para of your comment #4, but I do think it's a serious one.)

• http://prce.hu/w/ Huw Price I'm disappointed that CU Phil thinks that my objections in the Many Worlds@50 volume are less vociferous than those of Adrian Kent and David Albert!
My piece is here: http://philsci-archive.pitt.edu/3886/ I think that the plausibility of the Deutsch-Wallace axioms actually presupposes what needs to be shown, viz that there is some analogue of classical uncertainty in the MW picture. Moreover, if we assume that the argument is a good one with that assumption made explicit, then we can exploit a point noticed by Hilary Greaves to show the assumption must be false. Here's how. Let P = "There is a suitable analogue of classical uncertainty in MW", and Q = "Rationality requires that any Everettian agent should maximise her expected in-branch utility, using weights given by the Born Rule". Then if the Deutsch argument works, it establishes: 1. If P then Q. But Greaves' observation shows that Q simply can't be true, because MW introduces a new kind of outcome that an agent may have preferences about, namely the shape of the future wave function itself (or at least, the portion of it causally downstream from the agent's current choice). In effect, Q is telling us that rationality requires us to prefer future wave functions with a characteristic feature, that of maximising Born-rule-weighted in-branch utility. But this is obviously and trivially wrong, in the case of an agent who has preferences about the shape of the wave function itself, and just prefers (i.e., assigns a higher utility to) some other kind of future wave function. Decision-theoretic rationality tells us what to do, given our preferences. It doesn't tell us what our preferences should be. (But wouldn't such an agent already be crazy, for some other reason? No — see the paper for details.) Given Greaves' observation, then, there are only two possibilities: either P is false, or the Deutsch argument fails — either way, it's bad news for the project of making sense of MW probabilities in terms of decision-theoretic considerations.

• http://www.pipeline.com/~lenornst/index.html Len Ornstein It seems you're approaching this 'problem' as a Platonist, looking for a model (or models) which comes closest to some preconceived (and not widely entertained) concept of an absolutely true representation of reality — rather than from the general scientific requirement that a model's status must be judged by how closely the empirical record can be matched. For the Platonist 'test', the issue is whether or not Born's added 'axioms' and his formulation fit together with QM better than do the construction of Decision Theory and ITS unique axioms — perhaps an Occam's Razor type question. For the more generally accepted scientific requirement — match to the empirical record — you have so far offered no arguments to distinguish the 'performance' of Born's probability interpretation from that of a Decision Theoretical approach!

• http://skepticsplay.blogspot.com miller So… suppose that we have N identical systems, each with state |x1> + |x2>, where x1 and x2 are eigenvalues of operator X. And suppose we have an operator Y which represents a simultaneous measurement of X in all of the N systems. Operator Y gives the value 1 if nearly half of the measurements of X result in x1. Otherwise, operator Y gives the value 0. If I understand Deutsch's paper, we cannot say that a measurement of Y has a high probability of returning 1. But if we are rational decision makers, we would treat the expected value of Y as being close to 1 (and getting even closer to 1 as N goes to infinity).
This may not prove that the results actually follow the frequency distribution given by Born's rule, but it sure seems like the next best thing.

Huw– I'll admit I haven't read your paper or Greaves's, but that objection doesn't seem very convincing at face value. Can't we just say that preferences are something that people have about outcomes of measurements, not about wave functions? Outcomes are what we experience, after all.

• http://prce.hu/w/ Huw Price Sean, I don't think that response is going to help Deutsch and Wallace, who are trying to establish a claim about any rational agent, not just about agents with the kind of preferences we happen to have. But in any case, it is easy to think of examples of preferences for wave functions of the kind my objection needs, which are themselves grounded on what the wave functions imply about the experiences of people in the branches of those wave functions — e.g., a preference for a wave function in which I don't get tortured in any branches (even very low weight branches), over a wave function in which I do get tortured in a very low weight branch, but get rich in all the high weight branches. (My Legless at Bondi example in the paper is much like this, and I discuss why MW makes such a difference, compared to the classical analogue.)

• http://qpr.ca/blog/ Alan Cooper The key equation seems to be asserting that the "Value" (expectation?) of an observation (of the observable X-(x1+x2)) where the possible values are -x2 and -x1 is the same as subtracting (x1+x2) from the Value of an observation (of X) where the possible values are x1 and x2. And then the application of (2) seems to be saying that the Value of that observation (of the observable X-(x1+x2)) where the possible values are -x2 and -x1 is the negative of the Value of an observation (of X) where the possible values are x1 and x2. But if (2) is being applied this way — i.e. without regard to which observable is involved and so without regard to which of the two terms is associated with which value — then isn't that essentially assuming that equal probabilistic weights are being assigned to each of the two outcomes, which amounts to begging the question of probabilistic weights being equal when the vector magnitudes are? (After all, the principle that negating the payoffs negates the expectation requires keeping the same probabilities, and switching cases only works if the probabilities are equal: e.g. 1/3(-x1) + 2/3(-x2) = -{1/3(x1) + 2/3(x2)}, but {1/3(-x2) + 2/3(-x1)} is not the same.)
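Alan Cooper's parenthetical example can be restated in a few lines of Python; the weights 1/3 and 2/3 are from his example, and the payoff values are arbitrary numbers chosen for illustration:

# Negating all payoffs negates the expectation only while the weights
# stay attached to the same outcomes; negating AND swapping the outcomes
# matches the negated expectation only for equal weights.
p1, p2 = 1/3, 2/3
x1, x2 = 5.0, 11.0

original = p1 * x1 + p2 * x2        # about 9.0
negated = p1 * (-x1) + p2 * (-x2)   # the banker's side: about -9.0
swapped = p1 * (-x2) + p2 * (-x1)   # negate and swap outcomes: about -7.0

print(negated == -original)         # True for any weights
print(swapped == -original)         # False unless p1 == p2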
However, in his new book he does provide a semantic proposal which allows us to recover the truth of ordinary platitudes about the future (like 'I will see only one outcome of this experiment'), by interpreting them charitably as referring only to events in the speaker's own world. I have a new paper forthcoming in the British Journal for the Philosophy of Science which argues that the Greaves/Price objection can be met on its own terms, by leaving the physics, the epistemology and the semantics alone and instead tinkering with the metaphysics. Here's the link: http://alastairwilson.org/files/opieqmweb.pdf Sean's remarks above capture the spirit of my suggestion nicely: if Everett is right, then our ordinary thought and talk about alternative possibilities *just is* thought and talk about other Everett worlds. To reply to Huw's last points from this perspective: a) if Everett worlds are (real) alternative possibilities then any possible rational agent (not just one with preferences like ours) is going to be an agent with in-branch preferences, b) the kinds of 'preferences for wave-functions' that you describe can be made sense of on this proposal, though I would describe them differently; they correspond to being highly risk-averse with respect to torture.

• Philip "Last week, for example, we hosted a visit by Andy Albrecht from UC Davis." What do you think of Andy's de Sitter equilibrium cosmology (e.g. http://arxiv.org/abs/1104.3315 and references therein)?

Philip– I think it's an interesting idea, although the chances that it's right are pretty small. Andy takes the requirement of accounting for the arrow of time much more seriously than most cosmologists do, which is a good thing. But his intuition is that the real world is somehow finite, while my intuition is the opposite. (Intuition can't ultimately carry the day, of course, but it can guide your research in the meantime.)

• http://prce.hu/w/ Huw Price Alastair, Thanks for the link, though as you know, I prefer to tinker with metaphysics as little as possible 😉 Concerning your (a), my point doesn't depend at all on denying that we have in-branch preferences, but only on pointing out that the new ontology of the Everett view makes it possible for us to have another kind of preference, too — a preference about the shape of the future wave function. Concerning (b), any ordinary notion of risk-aversion is still a matter of degree, whereas the worry about low weight branches isn't a matter of degree. So you'll need infinite risk aversion, won't you? And in any case, what does the response buy you? A demonstration that the choices of an ordinary agent in an Everett world should be those of a highly risk-averse agent in a classical world? That doesn't seem good enough for the Deutsch-Wallace program. They want to show that the ordinary agent should make the same choices in the two cases.

• Daryl McCullough I'm not sure I understand what you're saying. In Sean's derivation, all the states are eigenstates of the X operator. The meaning of the state |x> is the eigenstate of the X operator with eigenvalue x. |x+k> is an eigenstate of the X operator with eigenvalue x+k. Sean's assumptions might make more sense to you if we explicitly introduce some additional operators. Let T(k) be the operator (the translation operator) defined by T(k) |x> = |x+k>. Let P be the operator (the parity operator) defined by P |x> = |-x>.
We assume that they are linear, which means
T(k) (|Psi_1> + |Psi_2>) = T(k) |Psi_1> + T(k) |Psi_2>
P (|Psi_1> + |Psi_2>) = P |Psi_1> + P |Psi_2>
So Sean's assumptions about the value function V(|Psi>) are basically:
(1) V(|x>) = x
(2) V(T(k) |Psi>) = V(|Psi>) + k
(3) V(P |Psi>) = -V(|Psi>)
(2) and (3) follow from (1) for eigenstates of the X operator, but we need the additional assumption that they hold for superpositions of eigenstates as well.

• http://qpr.ca/blog/ Alan Cooper ok — maybe trying three times is considered rude, but I would really appreciate it if someone could explain what I have wrong here. In the Deutsch paper we have "It follows from the zero-sum rule (3) that the value of the game of acting as 'banker' in one of these games (i.e. receiving a payoff -xa when the outcome of the measurement is xa) is the negative of the value of the original game. In other words" followed by your equation (2). But acting as banker is *not* the same as just having a *set* of outcome values which are the negatives of those of the player. They also have to be matched to the outcomes — i.e. it is the *ordered* sets which must be negatives. And in the case with Y = X-(x1+x2), it is in the situation where X sees x2 that Y sees -x1, and in the situation where X sees x1, Y sees -x2. This is not the same as Y being the "banker" when X is the "player", so I don't see why the values should sum to zero. Please, what am I missing?

• Daryl McCullough I don't understand what you mean when you say "in the case with Y = X-(x1+x2) it is in the situation where X sees x2 that Y sees -x1 and in the situation where X sees x1, Y sees -x2". That doesn't agree with the meaning of the "game" as described. I think you're confusing a sum of states with a tensor product of states. There is no need to talk about X and Y. You only need to talk about one operator, X. The game works by starting in a state |Psi>, measuring X in that state to get a value x. If x > 0, then the banker pays the player x dollars. If x < 0, then the player pays the banker -x dollars. So it's not that the banker measures one observable and the player measures a different one. There is only one measurement, and that determines who pays whom. The banker's winnings are always the negative of the player's winnings.

• Sudip Dear Sean, It seems to me that assuming the "two simple propositions" is just a way of putting the Born rule through a backdoor. Of course, they seem very intuitive, but how sure can we be that nature upholds them? I'm reminded of von Neumann's proof of the impossibility of deriving QM from a deterministic theory. As Bell pointed out, von Neumann made seemingly innocuous assumptions which may not be true. After all, why does V(|x+k>) have to be V(|x>)+k? Why can't it be V(|x>)+k^2? I understand that these are justified using decision theory. However, decision theory is a theory of decision making by rational agents — why should it have any relevance in the natural world? I admit that I haven't looked at Deutsch's paper or at Zurek's paper mentioned in the comments. On a related note, do you know of any attempt at defining what constitutes a measurement in the context of MWI? As the wave function branches, it seems to me that a fully formulated theory should explain where those branchings occur.
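One direction of Daryl's list can at least be checked mechanically: the Born-rule expectation value does satisfy (1)-(3), so the axioms are consistent with the rule they are meant to single out (which does not, of course, answer Sudip's "why"). A minimal sketch on a finite grid of eigenvalues, assuming Python with numpy; the construction is mine, not from the thread:

import numpy as np

n = 6
xs = np.arange(-n, n + 1).astype(float)   # eigenvalue grid: -6, ..., 6
dim = len(xs)

def born_value(psi):
    # Born-rule expectation of X in the state psi
    w = np.abs(psi)**2
    return float(np.dot(w, xs) / w.sum())

def T(k):
    # translation |x> -> |x+k>; amplitude shifted off the grid is lost
    return np.eye(dim, k=-k)

P = np.fliplr(np.eye(dim))                # parity |x> -> |-x>

psi = np.zeros(dim, dtype=complex)
psi[xs == 1] = psi[xs == 3] = 1 / np.sqrt(2)   # (|1> + |3>)/sqrt(2)

k = 2
print(np.isclose(born_value(T(k) @ psi), born_value(psi) + k))  # axiom (2)
print(np.isclose(born_value(P @ psi), -born_value(psi)))        # axiom (3)
print(born_value(psi))   # 2.0 = (1 + 3)/2, matching result (1)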
• Anonymous Coward As far as I understood MWI (correct me if I'm wrong; I didn't read Everett's paper, just a couple of graduate textbooks), the words "branching" and "measurements" should be viewed as a heuristic description of the following process and theorem: Suppose you do a (for simplicity, spin of an electron) measurement; the measurement is described by a unitary operator $U_M$ (time propagation of your apparatus). You call it a branching into two possible worlds (orthogonal subspaces spanning the entire Hilbert space of the MWI-world) $+$ and $-$, if the time-propagation for all later times leaves these subspaces almost invariant. If this should be the case, we can simplify all further calculations by projecting onto one of the subspaces and calculating the future evolution of each of these branches ("collapse the wavefunction"). What a nifty trick to get approximate results! Everett's contribution was to show that for suitable limits (larger Hilbert space, many particles, suitable definition of "almost invariant") and actual measurement devices (full QM toy models of amplifiers), this does in fact occur. Therefore, Schrödinger's equation alone implies the very good heuristic of collapsing wave-functions. Furthermore, if we should ever wish to assign weights to different branches, the only way to do this consistently is the Born rule — where consistently means "If I collapse after two measurements and calculate the evolution until the second measurement in full QM, I get roughly the same result as if I collapsed after the first measurement and again after the second one". This way, even if we believed in a magical "Copenhagen collapse induced only by human observers", Everett has shown that "occasional collapse + Born rule" yields very good approximate methods to calculate time-evolution until the "magical collapse". From here it is not far-fetched to postpone the "magical collapse" into the far future or *gasp* remove it altogether. Furthermore, we can set out to precisely define "branching in the sense of invariance of subspaces up to $\varepsilon$" or "up to order such and so". However, the words "branching" or "measurement" without further qualifiers should remain an (undefinable but not meaningless) heuristic, like "two points are close".

• Daryl McCullough Sudip writes: "why does V(|x+k>) have to be V(|x>)+k?" The meaning of |x> is that it is a state such that the measurement of operator X is certain to produce result x. So the expected result of an X-measurement is V(|x>) = x. Similarly, |x+k> is a state such that the measurement of X is certain to produce result x+k. So V(|x+k>) is x+k.

• Neal J. King What leaves me unsatisfied about this approach is that you are postulating the existence of an operator V with a complete set of states that behaves in the manner indicated, and then applying the inferred "Born's rule" to the rest of quantum mechanics. Can you make the argument work for real quantum operators that we have some reason to believe in? Like the z-component of spin-1/2?

• Hal S I am not entirely sure why the Born rule is hard to understand. The point of the process is to allow one to use the computational flexibility associated with functions on the order of the reals and extract certain features of those functions (like the peaks and valleys… or extrema of the function). Remember, the wave function itself is a continuous deterministic function. More specifically, we operate in the complex plane in order to exploit the computational power associated with manipulating systems with an uncountable basis.
If we accept the information extraction interpretation, the question is how to economize that process. Since we are dealing with complex numbers, and we are dealing with countable features of the wave function, we can ask what happens when we take the function to other powers. Since 2 is the smallest prime number, we can interpret any even-numbered power as simply being a rescaling of the information associated with squaring the number. If we consider odd powers, we can interpret the effect as being a rescaling of the wave function by some real number. If we consider all the potential combinations, one quickly must consider all possibilities, and essentially one realizes that what we are really doing is trying to capture all the information in the wave function; essentially we are also building a type of matrix that should be recognizable as an operator in a type of transformation procedure. In any case, squaring the amplitude is a process that economizes the information extraction from the complex plane into a series of integer-indexed real numbers.

• Hal S
It makes me wonder if one can make an argument that if all the trivial zeros of the zeta function lie on the real line, then all the non-trivial ones have to be on the one-half real line. Interesting.

• Sudip
@Anonymous Coward Thanks, that's helpful. @Daryl Sorry, I didn't mean to say that. Of course V|x+k> = V|x> + k by definition. What I intended to ask was why should V act linearly on a superposition of kets?

• http://jbg.f2s.com/quantum2.txt James Gallagher
The biggest criticism of Sean's post is that the argument fails to explain why the Born rule must obey a squared power relation rather than a quartic or higher one. Even Pauli recognized this problem back in 1933 (republished in English translation in his 'General Principles of Quantum Mechanics', Ch. 2, p. 15), where he deduced that the Born rule must be a positive definite quadratic form in 'psi', and anything not involving the product psi.psi* would not be conserved by the Schrödinger evolution, so we only have terms in psi.psi* = |psi|^2 and higher powers as possibilities. Pauli, being a genius, realised that only Nature then determines that the rule is a squared one (rather than a higher even power), and ultimately the rule is fixed by experimental observation, not deduced from anything simpler (and certainly not from obfuscating arguments involving rational beings and decision theory!). I mentioned above that there is a Bohmian argument for how the absolute-squared law is emergent from the dynamics of the evolution (e.g. http://arxiv.org/abs/1103.1589); this is true unless your initial distribution was a higher-power invariant one, so the squared-power one seems favoured on a positive measure set of starting distributions (maybe even measure 1). But you can just chuck away all the troublesome baggage that the Bohmian model entails and accept fundamental randomness; then the squared-power rule is the most likely outcome, a large-numbers result, i.e. it is a thermodynamic property of the evolution.

• Hakon Hallingstad
Sean @ 16 and Moshe, if one carries out the calculation for a|-x2> - a|-x1>, one comes to the equation:

V[a|x2> - a|x1>] + V[-a|x2> + a|x1>] = x1 + x2

However, under the assumptions above we are not allowed to assume V[a|x2> - a|x1>] = V[-a|x2> + a|x1>], and so the derivation stalls at this point.
It is absolutely crucial for the argument that the coefficients in a|x1> + b|x2> are equal, contrary to QM, which allows an arbitrary phase. For instance,

V[a|x1> + b|x2>] = (|a| x1 + |b| x2) / (|a| + |b|)

would be consistent with the two axioms. As far as I can tell, the axioms imply:
- V is linear in x1 and x2
- the coefficient of x1 is some function f(a, b), with f(1, 0) = 1
- the coefficient of x2 is f(b, a) = 1 - f(a, b)

In order to show that V is the expected average value of a measurement of X, one will have to prove f(a, b) = |a|^2 / (|a|^2 + |b|^2), so there is still a lot of derivation left to be done. And showing that the coefficient goes as |a|^2 is the hard part of the Born rule.

• http://alastairwilson.org/ Alastair Wilson
Huw, actually, I'd have thought that freely modifying metaphysics in situations like this is congenial to pragmatism. The 'harder' scientific claims of physics, confirmation theory, natural language semantics, etc. aren't meddled with; we just pick (on a pragmatic basis) whichever metaphysical framework allows the harder claims to hang together most naturally. On a), I was suggesting that any possible agent is going to be an agent with *only* in-branch preferences; sorry for being unclear. From the perspective I advocate, the whole state of the wavefunction is a non-contingent subject-matter: the only contingency is self-locating. On a functionalist account of mental states, it makes no sense to ascribe preferences defined over non-contingent subject-matters. (What's going on here is that the modal framework is helping reinforce Wallace's 'pragmatic' argument for his principle Branching Indifference.) On b), yes, the equivalent of wanting to avoid torture in any world, in the limiting case of infinitely many worlds, will be infinite risk aversion. Is that a problem? (In any case, the limiting case might turn out to be metaphysically impossible; that's an empirical matter.) What the response is meant to buy is a translation between 'preferences over wavefunctions' and ordinary preferences. Everettians who take this line can explain away the apparent coherence of preferences over wavefunctions by showing that they're just ordinary kinds of preferences (i.e. preferences about self-location) under an unfamiliar mode of presentation.

• Hal S
Just one last note. Using ' to represent an index, an equation that makes some of the previous comments clearer is

<H> = Sum (E' |z'|^2)

which is understood as meaning that the probability of seeing eigenvalue E' is the absolute value of the complex number z' squared. Now, Dirac has some interesting points that should be considered, in 'The Principles of Quantum Mechanics', 4th ed.: "One might think one could measure a complex dynamical variable by measuring separately its real and imaginary parts. But this would involve two measurements or two observations, which would be all right in classical mechanics, but would not do in quantum mechanics, where two observations in general interfere with one another - it is not in general permissible to consider that two observations can be made exactly simultaneously..." "In the special case when the real dynamical variable is a number, every state is an eigenstate and the dynamical variable is obviously an observable.
Any measurement of it always gives the same result, so it is just a physical constant, like the charge of an electron." "Even when one is interested only in the probability of an incomplete set of commuting observables having specified values, it is usually necessary first to make the set a complete one by the introduction of some extra commuting observables, and to obtain the probability of the complete set having specified values (as the square of the modulus of a probability amplitude), and then to sum or integrate over all possible values of the extra observables." So an observer cannot make two simultaneous measurements of the same observable, physical constants are real numbers, and if you don't have enough indices to fully describe the state, you add more indices and consider all potential values. Since this procedure can continue indefinitely, one begins running into the same problems with the continuum. The point of this rambling is that although we cannot know whether such a higher-order hierarchy has real existence, we have to resort to it from a computational standpoint.

• http://van.physics.illinois.edu/qa/index.php Michael Weissman
Just a quick semi-coherent placeholder note, since I have to run now. As you say, the issue of P in MW is much trickier than if you have some sort of extra collapse in which to insert special new rules. The traditional argument justifying Born is the one that Ben refers to, reproduced by Arkani-Hamed, but that's long since been known to be invalid, since the limiting procedure is irrelevant. On Deutsch and decision theory: "Given a game with a certain set of possible payoffs, the value of playing a game with precisely minus that set of payoffs is minus the value of the original game." What does a "precisely minus payoff" even mean, except in the context of little financial games, where the statement is well known to be false? The question is not so much what a rational actor would bet, but how the existence of rational actors can be reconciled with the unitary structure + decoherence. The problem becomes one of why the probabilities for sequential observations factorize, i.e. why the chance of Schroedinger's cat having survived the Tuesday experiment doesn't change on Wednesday due to quantum fleas. As has repeatedly been shown, only the standard quantum measure gives the conserved flow needed to allow that factorization and hence allow the existence of rational actors. So that's a requirement but not an explanation. The best (only) explanation I've seen is by Jacques Mallah. If the state consists of the usual part we think about plus some maximal entropy white noise, a physical definition of a thought as a robust quantum computation, together with ordinary signal-to-noise constraints on robustness (square-root averaging), gives the Born rule from ratios of counts of thoughts! Why that particular (mixture of low-S + high-S parts) starting state? Mallah doesn't like this idea, but I suggest the old cheat: anthropic selection. If that type of state is needed to allow the existence of rational actors, nobody will be arguing about why they find themselves part of some other type of state. I'll try to get back to fill this in more coherently in 24 hours. P.S. Zurek's paper sneaks in context-independent probabilities, and thus doesn't really address the core question.

• Abram Demski
How do the coefficients enter into the story at all?
It looks like assumptions (2) and (3) make just as much sense if the coefficients for the two states are different, but if that's true, then we can derive (1) for the case when the coefficients are different as well... in other words, taken at face value, the argument seems to prove that V[a|x_1> + b|x_2>] = 1/2 (x_1 + x_2) no matter what 'a' and 'b' are.

• Abram Demski
I revoke my previous question (after actually trying to carry through the math).

• Michael Weissman
I should make at least one small correction to my hasty and over-compact note. The background entropy in Mallah's picture is high, not maximal.

• Hakon Hallingstad
Since this article doesn't explain where the absolute square of the amplitude comes in with Deutsch's argument (48), I have read his paper, which introduces it in equations 16-21. However, I don't understand the argument. It would be great if someone could explain why the value of eq. 18 equals the LHS of eq. 16, i.e. why is

V[|x1>|y1> + ... + |x1>|ym> + |x2>|y_{m+1}> + ... + |x2>|yn>] = V[sqrt(m) |x1> + sqrt(n - m) |x2>]

when y1 + ... + ym = y_{m+1} + ... + yn = 0? Can this actually be derived, or is it an axiom? If the former, it does seem to rely on the state vectors being normalized, which would also need to be postulated?

• Hal S
Got a copy of Pauli's book. Good stuff. I like this on the first page, written in 1933: "The solution is obtained at the cost of abandoning the possibility of treating physical phenomena objectively, i.e. abandoning the classical space-time and causal description of nature which essentially rests upon our ability to separate uniquely the observer and the observed." Combined with the fact that any bound state can be represented in a quantum field theory, it appears we are getting closer to completely abandoning any notion that general relativity is even needed.

• http://qpr.ca/blog/ Alan Cooper
Daryl, thank you for responding (@36 & 38) to my question. Unfortunately I have been away for a few days and so have been slow to respond, but I hope you are still around and following this discussion, as I remain puzzled. I have no problem with agreeing that your conditions (1)(2)(3) imply the Born rule (and similarly for Sean's and David's similarly numbered equations), but I still don't see how these are implied by decision theory without essentially assuming the Born rule to start with. Yes, the "states" in question are all eigenfunctions for the same observable, but on the two sides of each value equation (other than (1)) they correspond to different eigenvalues, so they are not actually the same states. In fact, the decision-theoretic increment of value that is expected from replacing X by X+k, and the reversal that comes from replacing X with -X, seem to me to be obvious only if we work with the same state and consider the observable to be what is changing. To ask for these to also apply when the operator stays the same but the states are changed seems to involve an implicit assumption that V(a1|x1> + a2|x2>) is a linear combination of x1 and x2 with coefficients p1(a1, a2) and p2(a1, a2) which are independent of |x1> and |x2>. And to me that looks very much like begging the question. Is there a way to show (without assuming the usual expectation formula) that V(X+k, |Psi>) = V(X, T(k)|Psi>)?
• http://qpr.ca/blog/ Alan Cooper
What seems odd about this business of starting with the Hilbert space and inferring a probabilistic interpretation after the fact is that the Hilbert space itself arises naturally as a way of representing the possible families of probability distributions for observables. In that approach, pioneered by von Neumann and Mackey, and nicely developed and summarized in the books by Varadarajan, the starting point is a lattice of questions (observables with values in {0,1}), and the notion of probability for these seems to be no less elementary than that of the decision-theoretic "value", since the expected value of a proposition in any state is just the same as the probability that it is observed to be true.

• Hakon Hallingstad
Here's an example where (2) and (3) are consistent with a different probability rule than Born's. Just before we measure the observable X (or "play the game" in Deutsch's terminology), we will scale |psi> such that the sum of the expansion coefficients is 1:

1. |psi> = a1 |x1> + a2 |x2>
2. a1 + a2 = 1

This scaling is not as physically illogical as you might think; for instance, the collapse of the wavefunction can also be viewed as containing a rescaling of the observed eigenvector immediately after/during the observation. Let |base> be the sum of all eigenvectors of X:

3. |base> = |x1> + |x2> + ...

I'm going to show that the following definition of the expected value of the measurement ("payoff") satisfies (2) and (3) in this article:

4. V[|psi>] = <base| X |psi>

Here's how (2) is satisfied:

5. V[a1 |x1 + k> + a2 |x2 + k>] = a1 (x1 + k) <base|x1 + k> + a2 (x2 + k) <base|x2 + k> = a1 x1 + a2 x2 + k = V[a1 |x1> + a2 |x2>] + k

Above, <base|x1 + k> is 1, since |base> is the sum of the eigenvectors and |x1 + k> is an eigenvector. Similarly, (3) is satisfied because:

6. V[a1 |-x1> + a2 |-x2>] = -a1 x1 <base|-x1> - a2 x2 <base|-x2> = -(a1 x1 + a2 x2) = -V[a1 |x1> + a2 |x2>]

The main result in the article is then reproduced easily:

7. V[(|x1> + |x2>) / 2] = (x1 + x2) / 2

• Hakon Hallingstad
Let me follow the same arguments in this blog article and Deutsch's, to prove something other than the Born rule:

0. V[a1 |x1> + a2 |x2> + ...] = |a1| x1 + |a2| x2 + ...

To be able to make the argument, we will need to postulate that the state vector should be scaled just prior to the measurement such that the sum of the absolute values of the probability amplitudes is 1, instead of being normalized:

1. If |psi> = a1 |x1> + a2 |x2> + ..., then |a1| + |a2| + ... = 1

Because of this postulate, instead of (|x1> + |x2>) / sqrt(2) we will use (|x1> + |x2>) / 2, etc. If we now assume the equivalent of equations (2) and (3) from this blog article:

2'. V[(|-x1> + |-x2>) / 2] = -V[(|x1> + |x2>) / 2]
3'. V[(|x1 + k> + |x2 + k>) / 2] = V[(|x1> + |x2>) / 2] + k

we will end up with the equivalent equation, for exactly the same reasons made in this article, since the sqrt(2) never comes into the derivation. Let's move over to Deutsch's article, and the chapter "The general case". We first want to prove the equivalent of equation (Deutsch.12):

12'. V[(|x1> + |x2> + ... + |xn>) / n] = (x1 + x2 + ... + xn) / n

The proof is made by induction in two stages. Now, I must admit that I don't understand the first stage, but it doesn't sound like that will be a problem for my argument (the sqrt(2) is again not used). For the second stage we can use the same arguments of "substitutability". Let V[|psi1>] = V[|psi2>] = v; then:

13'. V[(a |psi1> + b |psi2>) / (|a| + |b|)] = v

If we now set
14'. |psi1> = (|x1> + ... + |x_{n-1}>) / (n - 1), |psi2> = | V[|psi1>] >, a = n - 1, b = 1

then (13') implies:

15'. (x1 + x2 + ... + x_{n-1} + V[|psi1>]) / n = V[|psi1>]

Note, (15') is identical to (Deutsch.15). Now to the crucial part of Deutsch's argument. What we want to show, the equivalent of (Deutsch.16), is:

16'. V[(m |x1> + (n - m) |x2>) / n] = (m x1 + (n - m) x2) / n

and (Deutsch.17), (Deutsch.18), and (Deutsch.20) are:

17'. sum_{a = 1}^m |ya> / m or sum_{a = m + 1}^n |ya> / (n - m)
18'. (sum_{a = 1}^m |x1>|ya> + sum_{a = m + 1}^n |x2>|ya>) / n
20'. (sum_{a = 1}^m |x1 + ya> + sum_{a = m + 1}^n |x2 + ya>) / n

Again, we're allowed to do this according to the postulate, because we're just about to do a measurement, and then we need to scale such that the sum of the absolute values of the probability amplitudes is 1. Equations (Deutsch.19) and (Deutsch.21) are not changed. (Deutsch.22) obviously reads:

22'. sum_a p_a |x_a>, sum_a p_a = 1

The next arguments may pose a problem. They're supposed to show that even though the above results are valid for p_a being rational numbers, they should also apply if p_a is a real number. For instance, a unitary transformation is imagined that transforms eigenvectors into eigenvectors with higher eigenvalues. The value of the game is then guaranteed to increase. Not so with our postulate, since we need to scale our state vector just prior to a measurement, and in general the scale factor would be different before a unitary transformation and after. I'm guessing there is an argument for proving how to extend it to real numbers, but I just don't see it yet. So for now, we will have to be content with the probability amplitudes being rational numbers. The conclusion of all of this is that the normalization of the state vector is crucial for Deutsch's derivation.

• DanW
@Hakon Hallingstad: your reasoning here is, I'm afraid, totally bogus. I'm not trying to be nasty, and I'm sure you won't take it as such, since you seem in other posts to be pretty keen on learning properly how to do these things. One particular error I can spot: "For instance, a unitary transformation is imagined that transforms eigenvectors into eigenvectors with higher eigenvalues." In what follows, m* = "Hermitian conjugate of m", not "times by" :-). Unitary operators have eigenvalues of magnitude 1. To see this, consider that the definition of a unitary operator is that its inverse is equal to its Hermitian conjugate: UU* = 1 by the definition of unitarity. If U|a> = m|a>, this implies <a|U* = <a|m*, but <a|U*U|a> = <a|a> by the unitarity definition. From above, <a|U*U|a> = m m* <a|a>, hence m m* = 1. This means that the magnitude of m is 1. So you can't have a "unitary transformation" that makes "the eigenvalues higher". It is a contradiction in terms.

• Hakon Hallingstad
> your reasoning here is, I'm afraid, totally bogus. [...]
Right, assumption (61.1) does not hold in Quantum Mechanics proper. I'm interested in knowing about other flaws you can point out, and in seeing whether those flaws can also be applied to Deutsch's original arguments.
> One particular error I can spot. [...]
I was too careless with my choice of words, so you misunderstood me. I was only trying to refer to Deutsch's argument on page 12; for instance, he says "Now, if U transforms each eigenstate |xa> of X appearing in the expansion of |psi> to an eigenstate |xa'> with higher eigenvalue." See there for details.
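Hakon's alternative rule is easy to check numerically. A minimal sketch (my own illustration, under his postulate that the amplitudes are rescaled so that |a1| + |a2| = 1 before a measurement) confirming that the absolute-value rule also satisfies the shift and reflection axioms:

```python
import numpy as np

def abs_value_rule(amps, xs):
    """Hakon's alternative rule: V = sum_i |a_i| x_i, with sum_i |a_i| = 1."""
    w = np.abs(np.asarray(amps, dtype=complex))
    w /= w.sum()                  # enforce the postulated rescaling
    return float(np.dot(w, xs))

rng = np.random.default_rng(1)
a = rng.normal(size=2) + 1j * rng.normal(size=2)
x = np.array([0.8, -1.9])
k = 3.0

# Shift axiom: V[a1|x1+k> + a2|x2+k>] = V[a1|x1> + a2|x2>] + k
assert np.isclose(abs_value_rule(a, x + k), abs_value_rule(a, x) + k)
# Reflection axiom: V[a1|-x1> + a2|-x2>] = -V[a1|x1> + a2|x2>]
assert np.isclose(abs_value_rule(a, -x), -abs_value_rule(a, x))
# Equal weights reproduce the headline result V = (x1 + x2)/2
assert np.isclose(abs_value_rule([0.5, 0.5], x), x.mean())
print("The |a|-weighted rule passes both axioms.")
```

This supports his point that the two axioms alone do not single out |a|^2; the normalization convention imposed on the state does much of the work.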
Chemical Dynamics and Kinetic Modelling

In recent years, quantum chemistry has become truly accurate, with uncertainties comparable to typical uncertainties in many experiments. This should be leading to a complete transformation of chemistry, but so far it has not. A major cause of this failure is that accurate quantum chemistry calculations of interesting observables (e.g., product mixture composition in organic synthesis and heterogeneous catalysis, kinetic isotope effects, and rates of low-temperature reactions) are quite involved, and often require the efforts of several professional quantum chemists, each a specialist in a certain step of the calculation. We are working on next-generation algorithms with which many of these calculations will be routinely performed by the scientists interested in the problem, rather than by computational chemists who do not know the real physical system.

Quantum Mechanical Effects in Chemical Dynamics

The inclusion of quantum mechanical nuclear effects (such as zero-point energy and tunneling) in the calculation of chemical reaction rates is of particular importance. The role of these effects is well known from textbooks: changes in zero-point energy between the reactants and the transition state are responsible for the observed kinetic isotope effects in a wide variety of reactions, and tunneling can increase the rate of an activated proton transfer reaction at low temperatures by several orders of magnitude. The exact inclusion of these effects in calculations of chemical reaction rates is one of the most challenging tasks of modern theoretical physical chemistry, because even assuming that a reliable electronic potential energy surface (PES) is available, the computational effort needed to solve the reactive scattering Schrödinger equation increases exponentially with the number of atoms in the reaction. We are working on developing approximate methods to overcome this problem and to provide a practical way to include quantum mechanical effects in reaction rate calculations.

Advanced Methods for Discovery of Elementary Chemical Reactions and Prediction of Chemical Reaction Networks

We are working on the development of advanced automated algorithms for discovering important new chemical reactions. The problem of finding unexpected reactions is very challenging because it scales exponentially with the number of atoms in the reactant(s). The key to significantly improving the scaling is to use evolutionary algorithms which use all the information that is known about the potential energy surface (PES) and chemical bonds to improve the probability that the next search step will be near a saddle point. The algorithm then uses the computed energy, gradients and Hessian at that search point to "learn" more about the PES landscape and to make better-informed decisions about which points to search next.

Prognosis of Site-Selective Chemical Reactivity for Organic Molecules

The goal of this project is to create a collection of fast and reliable algorithms for predicting the reactivity of organic molecules from their structure employing quantum chemistry calculations. Traditionally, computational analysis of possible reaction pathways requires working with large datasets. There are some general-purpose workflow engines that allow users to organize and schedule different tasks using a graphical user interface.
However, quantum chemistry calculations yield results that cannot be used by most organic chemists directly. The output of the calculations must be translated into a language or formalism that makes their chemical relevance clear. Although there are billions of reactions involved, only a limited number of factors or reactivity principles exist which apply to the vast majority of chemical reactions. For example, factors such as "acidity", "basicity" and "Lewis basicity" are important reactivity descriptors for the largest number of reactions. The main idea of this project is to automate the analysis of proposed reaction pathways by calculating their key parameters. We strongly believe that the project will help chemists to understand various reaction mechanisms and to discover new reactions.

Algorithms for Optimization of Heterogeneous Catalysts

The development of efficient algorithms for computational catalytic design with minimal human intervention and optimal computational expense represents one of the main challenges of present-day theoretical chemistry and physics. We are working on a novel approach for computationally screening heterogeneous catalysts with variable-composition simulations of the material. At the core of our approach is an evolutionary algorithm which incorporates "learning from history", done through selection of low-energy, high-catalytic-activity structures to become parents of the new generation. Combining it with automated algorithms for screening the catalytic activity of heterogeneous catalysts provides a systematic and exhaustive tool for screening sets of chemically varied complex compounds. With the proper choice of the descriptor of catalytic activity, all relevant parameters can be analyzed automatically and the most promising materials identified. The method has several innovative characteristics that allow its application to complex materials, as it is automated and requires minimal human intervention. We are working on several interesting applications.
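A rough sketch of the evolutionary loop described above (a generic illustration only; the names energy, mutate and crossover are placeholders of mine, not the group's actual codes):

```python
import random

def evolve(population, energy, mutate, crossover,
           n_generations=50, n_parents=4):
    """Generic evolutionary search: the best-scoring candidates become
    parents of the next generation, so the search 'learns from history'."""
    for _ in range(n_generations):
        # Select the best candidates as parents (lower score = better,
        # e.g., low energy or a descriptor of high catalytic activity)
        parents = sorted(population, key=energy)[:n_parents]
        children = []
        while len(children) < len(population) - n_parents:
            p1, p2 = random.sample(parents, 2)
            children.append(mutate(crossover(p1, p2)))
        population = parents + children   # elitism: parents survive
    return min(population, key=energy)

# Toy usage: "structures" are coordinate vectors, scored by a simple bowl
if __name__ == "__main__":
    random.seed(0)
    dim = 6
    energy = lambda s: sum(x * x for x in s)
    mutate = lambda s: [x + random.gauss(0.0, 0.1) for x in s]
    crossover = lambda a, b: [random.choice(pair) for pair in zip(a, b)]
    population = [[random.uniform(-2, 2) for _ in range(dim)]
                  for _ in range(20)]
    best = evolve(population, energy, mutate, crossover)
    print("best energy:", energy(best))
```

In a real screening run, energy would wrap a quantum chemistry calculation and mutate/crossover would act on atomic structures or compositions; the loop itself stays the same.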
Atoms 2014, 2(3), 334-356; doi:10.3390/atoms2030334

Spectral-Kinetic Coupling and Effect of Microfield Rotation on Stark Broadening in Plasmas

Alexander V. Demura 1,* and Evgeny Stambulchik 2
1 National Research Centre "Kurchatov Institute", Kurchatov Square 1, Moscow 123182, Russia
2 Faculty of Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
* Author to whom correspondence should be addressed; Tel.: +7-499-196-7334; Fax: +7-499-943-0073.

Received: 13 May 2014; in revised form: 17 June 2014 / Accepted: 2 July 2014 / Published: 30 July 2014

Abstract: The study deals with two conceptual problems in the theory of Stark broadening by plasmas. One problem is the assumption of the density matrix diagonality in the calculation of spectral line profiles. This assumption is closely related to the definition of the zero wave function basis within which the density matrix is assumed to be diagonal, and it is obviously violated under a change of basis. A consistent use of the density matrix in the theoretical scheme inevitably leads to an interdependence of the atomic kinetics, describing the population of atomic states, with the Stark profiles of spectral lines, i.e., to spectral-kinetic coupling. The other problem is connected with the study of the influence of microfield fluctuations on Stark profiles. Here the main results of the perturbative approach to ion dynamics, called the theory of thermal corrections (TTC), are presented, within which the main contribution to the effects of ion dynamics is due to microfield fluctuations caused by rotations. In the present study the qualitative behavior of the Stark profiles in the line center predicted by TTC is confirmed using non-perturbative computer simulations.

Keywords: foundations of Stark broadening theory; density matrix; coupling between population and spectral distribution; microfield fluctuations caused by its rotations; MD simulations

In Memory of Professor of the Moscow Physical and Engineering Institute Vladimir Il'ich Kogan (11 July 1923 - 7 December 2013), passionate outstanding scientist, pioneer of plasma physics and nuclear fusion research at the Kurchatov Institute of Atomic Energy, eminent enthusiastic lecturer, great kind teacher and famous witty connoisseur of courtly linguistics, and many more...

1. Introduction

The study of Stark broadening in experiments, theory and simulations has up to now achieved significant progress [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78]. This allowed the beginning of profound, detailed comparisons of the various computer codes developed for the calculation of spectral line profiles in plasmas, which was the main purpose of the first two SLSP workshops [74].
However, the understanding and comparison of the realizations of the most successful contemporary codes on a wide set of physical cases (see [74,75,76,77] and other articles of this issue) give good reason to once more discuss, check, and revise the physical notions and ideas which form the foundation of the contemporary theory of spectral line broadening by plasmas. The present article pursues these aims. In fact, this article deals only with two questions from the list of conceptual problems in the theory of Stark broadening by plasmas [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77]. The first one is the construction of the spectral line broadening theory without the assumption of density matrix diagonality. Very often this assumption cannot be validated, namely when an interaction mixes states whose energy splitting is less than or comparable with the magnitude of the interaction [42,43,44,45,50,53]. A consistent introduction of the density matrix inevitably leads to an interdependence of the atomic kinetics, describing the population of atomic states, with the Stark profiles of spectral lines, which could evidently be defined as spectral-kinetic coupling [42,43,44,45,50,53]. Usually this is also interrelated with the appearance of interference effects [42,43,44,45,50,53]. The other problem is related to attempts to separate the influence of microfield fluctuations on Stark profiles into two components, perpendicular and parallel to the microfield direction [5,6], in order to isolate the contribution of microfield rotation effects [12,23,24,25,30,33], predicted to dominate in the central part of the line [23,24,25]. The necessity of this discussion arose when it was recently shown that the existing approaches to accounting for ion dynamics give results that differ from one another [74,75,76]. This inspired attempts to describe ion dynamics effects in terms of physical mechanisms, instead of the practically tacit, conventional numerical comparison of complicated simulations that began with the first works done using the Model Microfield Method (MMM) in the 1970s [14,15,28,29]. The basics of the theory of thermal corrections for Stark profiles, along with the results of [7,8,9,12,23,24,25], are given in Section 3. Later, the predictions of [12,23,24,25] about the significance of microfield rotation effects in ion dynamics were confirmed within other approaches [30,33], but until now this had not been explicitly demonstrated in the results of computer simulations (see also [75]). In the present work, simultaneously with [75], an approximate way to separate the microfield rotation effects in the profile of Ly-alpha within MD simulations is realized and described in Section 4. However, this separation cannot in fact be achieved rigorously, due to the existence of correlations between the statistical characteristics and the dynamics of atomic systems, which we call statistical-dynamical coupling. The specially designed numerical experiments allowed confirming the qualitative predictions of the behavior of the ion-dynamical Stark profiles of Ly-alpha given earlier in [23,24,25].
This includes: (1) the predominance of microfield rotation in the formation of the central part of the Stark profile in plasmas for lines with central components; (2) a specific spectral behavior of the Stark profiles near the line center as a function of the plasma temperature and the reduced mass of the perturber-radiator pair; (3) a universal spectral behavior of the difference Stark profiles for two different reduced masses of the perturber-radiator pair. For the purpose of the discussion, a short review of the theoretical approaches and methods developed for and applied to the study of Stark broadening by plasmas is entwined into the argumentation of each section.

2. Spectral-Kinetic Coupling

Let us consider the broadening of the hydrogen atom in the well-known setting of the so-called standard theory (ST), related to the formulation given in [17,18,21,51]. Then the total Stark profile of a spectral line, I(Δω), is represented as the convolution of the Stark subprofiles, corresponding to the transitions between upper and lower substates broadened by electrons at a fixed value of the ion microfield, integrated over the ion microfield values with the microfield probability distribution function as a weight:

I(Δω) = (1/π) Re ⟨ Tr { d̂ ρ̂ Ĝ(Δω, F) d̂ } ⟩_F,  Ĝ(Δω, F) = [i(Δω − Ω̂(F)) − Φ̂]^(−1)   (1)

In Equation (1), Δω = ω − ω0 is the detuning of the cyclic frequency from the line center, the outer angle brackets correspond as usual to an averaging over the microfield values F (F is the absolute value of the microfield), ρ̂ is the density matrix operator, d̂ is the dipole moment operator, Ĝ is the resolvent operator, Ω̂(F) is the operator of the linear Stark shift between sublevels of the upper level n and the lower level n′ in the line space (the direct product of the subspaces of the upper and lower energy levels with the principal quantum numbers n and n′), and Φ̂ is the electron broadening operator; the indexes αβ and α′β′ designate the quantum states of the upper and lower levels in the bra and ket vectors of the line space, respectively. Conventionally assuming that the density matrix in Equation (1) is diagonal, it can be factored out, since in the region of small microfield values all sublevels are degenerate and thus equally populated [17,18,21]. For now, the fine structure splitting is neglected. In this region, the spherical quantum functions (labeled by the quantum numbers n, l, m) form the natural zero basis of the problem, due to the spherical symmetry [17,18,21]. Suppose now that the value of the ion microfield is increased, so that the sublevels are split due to the linear Stark effect in the ion electric microfield and the degeneracy is partially removed. Now the parabolic quantum functions (labeled by the parabolic quantum numbers n1, n2, m) form the natural zero basis of the problem, due to the cylindrical symmetry [17,18]. For sufficiently large values of the microfield, the impacts of electrons cannot equalize the populations of the sublevels [17,18], and thus the assumption of population equipartition [17,18,21], made as an initial condition, becomes violated. Evidently, since the density matrix cannot have the diagonal equipartition form in the two different bases, the initial assumption of the theory [17,18,21] becomes invalid (see [42,43,44,45]). This simple example thus shows that a more consistent approach should simultaneously consider the atomic kinetics and the formation of the spectral line profiles, which is just what spectral-kinetic coupling signifies [42,43,44,45,50,53].
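The basis dependence of diagonality is easy to see numerically. A minimal sketch (a generic two-level illustration, not the actual spherical-parabolic transformation of hydrogen): a density matrix with unequal diagonal populations acquires off-diagonal elements (coherences) under any nontrivial change of basis, whereas the equipartition matrix, being proportional to the identity, stays diagonal in every basis.

```python
import numpy as np

# Populations diagonal in basis A (unequal sublevel populations)
rho = np.diag([0.7, 0.3])

# A unitary change of basis, standing in for the spherical <-> parabolic mixing
theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rho_B = U @ rho @ U.conj().T
print(np.round(rho_B, 3))
# [[0.5 0.2]
#  [0.2 0.5]]  -> off-diagonal coherences appear in basis B

# Equipartition is basis independent: U (1/2) U* = 1/2
rho_eq = np.eye(2) / 2
assert np.allclose(U @ rho_eq @ U.conj().T, rho_eq)
```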
Those drawbacks in constructing Stark profiles under the assumption of a diagonal density matrix are partially weakened if one introduces the dependence of the electron impact broadening operator on the Stark splitting of the levels in the ion microfield [17,18]. Then the non-diagonal matrix elements are next-order corrections in comparison with the impact widths for large values of the ion microfield, and the terms responsible for line mixing drop off more rapidly in the line wings [17,18]. On the other hand, for small values of the ion microfield, for which the Stark splitting is of the order of the non-diagonal matrix elements of the electron impact broadening operator, the Stark components collapse to the center, thus becoming effectively degenerate [16,17,18], as it should be by straightforward physical reasoning (it is worth recalling that it was academician V.M. Galitsky who pointed out this effect to the authors of [16]). This collapse phenomenon is characterized by the appearance of a dependence of the decay constants and intensities of the "Stark components", redefined inside the collapse region, on the microfield, while the energy splitting of these "Stark components" disappears. These redefined Stark components appear in the process of solving the secular equation for the resolvent operator in the line space [13,14,15]. At F = 0, the intensity of one of the redefined components becomes equal to zero, while the other one gives a contribution to the center of the profile identical to the contribution of the two symmetric lateral Stark components without their redefinition during the solution of the secular equation for the diagonalization of the resolvent [14,15,16]. Therefore, the collapse phenomenon of the Stark components signifies the necessity of changing the wave function basis from the parabolic to the spherical wave functions, or vice versa, depending on whether the ion microfield value decreases or increases. Simultaneously, of course, this marks the region of singularity for the ST assumption of density matrix diagonality [21,51]. From this consideration it is obvious that the collapse phenomenon has a kinetic character and is in fact one of the examples of spectral-kinetic coupling. Thus, the existence of the collapse phenomenon of the Stark components at the same time implies the necessity of a more complete consideration of Stark broadening within the formalism of kinetic equations for the density matrix [42,43,44,45,50,55] or, in other terms, the necessity of applying the kinetic theory of Stark broadening. The spectral-kinetic coupling discussed above is a common thing in laser physics [42]. Indeed, the lasing condition is directly connected with the difference of populations which, in turn, is proportional to the non-diagonal matrix elements of the density matrix (called coherences), describing the mixing of the upper and lower levels due to the interaction with the radiation field [42]. In the density matrix formalism it is necessary to solve the kinetic equation, which leads to a system of much larger rank than in the conventional amplitude approach, and this significantly complicates finding the solution [42,43,44,45,50,55]. Moreover, the construction of the terms describing sinks and sources is not straightforward, as during their derivation it is necessary to average over a subset of variables, taking account of the specific physical conditions [37,42,43,44,45].
So, as a rule, these terms can be derived in a more or less general form only in the impact limit, and their concrete expressions are rather arbitrary [37,42,43,44,45,50,55]. Moreover, even formula (1) should then be changed to a more general and complex expression for the power that is absorbed or emitted by the medium [42,43,44,45,50,55].

3. Ion Dynamics in Statistical and Spectral Characteristics of Stark Profiles

Within the assumptions of ST, the plasma ions are considered static [17,18,21], and hence the resulting Stark profiles are called static or quasistatic. We now consider the ion dynamics effects, i.e., the deviations from the static Stark profiles due to the thermal motion (see, for example, [7,8,9,12,17,18,20,21,23,24,25,26,27,28,29,30,31,32,33,34,35,36,74,75,76]). If these deviations are small enough (they were called "thermal corrections" in the earlier works [7,8,9,12,21,23,24,25]), then it is possible to express them through the second moments, over the microfield time derivatives, of the joint distribution functions W(F, dF/dt, d²F/dt²) of the ion electric microfield strength vector F and its first and second time derivatives [5,6,7,8,9,12,21,23,24,25,48,67]. The basic idea of the Markov construction of these joint distributions is that F, dF/dt and d²F/dt² are independent stochastic variables, formed by summation of the electric fields, or their derivatives, over all individual ions of the medium [1,5,6]. So, these probability distribution functions are many-body objects [5,6,7,8,9,12,21,23,24,25,48,67]. However, these joint distributions possess nonzero constrained moments over, for example, the microfield time derivatives when the value and direction of the electric ion microfield strength vector are fixed [5,6,7,8,9,12,21,23,24,25,48,67]. Thus, each value of the ion microfield corresponds in fact to, for example, a nonzero "mean" value of the square of its derivative. In other words, by fixing one of the initially independent stochastic variables, the mean values of the other ones become nonzero and functionally dependent on the value of the fixed variable [5,6]. So, the direct correlations between the fixed stochastic variables and the moments over the other ones under this condition are evident. Another kind of correlation appears if one considers large values of the ion microfield, which are produced by the nearest particle (the so-called "nearest neighbor approximation") [5,6,7,8,9,12,21,23,24,25,48,67]. In this case there is a direct proportionality between the value of the ion electric field and its time derivative, where the stochasticity enters through another stochastic variable, the particle velocity. Indeed, the mean square value of the particle velocities is a necessary factor in the second moments over the microfield time derivatives [5,6,7,8,9,12,21,23,24,25,48,67]. Consider now the components of the microfield time derivative that are perpendicular and parallel to the direction of the ion microfield strength vector [5,6,7,8,9,12,21,23,24,25,48,67].
By calculating the second moments of the perpendicular and parallel components of the time derivative, it is possible to establish relations between them in the limits of small and large reduced microfield values β = F/F0 (F0 is the Holtsmark normal field value [1,5,6,7,8,9,12,21,23,24,25,48,67] and F is the current microfield value), assuming for simplicity (but without loss of generality) that the ions produce Coulomb electric fields. In the case of β ≪ 1, the ion microfield is formed by many distant ions and, due to isotropy, the following relation takes place [5,6,24,25]:

⟨(Ḟ⊥)²⟩_F = 2 ⟨(Ḟ∥)²⟩_F   (2)

On the other hand, in the case of β ≫ 1, the ion microfield is formed by the nearest neighbor, and the corresponding relation shows the preferential direction of the microfield fluctuations along the microfield vector [5,6,24,25]:

⟨(Ḟ∥)²⟩_F = 2 ⟨(Ḟ⊥)²⟩_F   (3)

The general expression for ⟨(Ḟ⊥)²⟩_F in the case of a Coulomb electric field of point charges is given by Equation (4), in which e is the elementary charge and ⟨v²⟩ designates the mean square of the radiator velocity. A complex ionization composition is accounted for in Equation (4) (note the generalization of the F0 definition). The definitions of the mean values for a composition of various ion species "s" with charges Zs and thermal velocities vi,s are given by the relations of Equation (5). Similarly to Equation (4), Equation (6) gives the expression for the parallel component ⟨(Ḟ∥)²⟩_F. In Equations (4) and (6), H(β) is the Holtsmark function [1]; the other functions entering them are related to H(β) through the integral and differential equations of Equation (7) (see [5,6]), and their asymptotic properties are given by the relations of Equation (8) [3]. The results of [5,6] for the joint distribution functions and their moments for the Coulomb potential were generalized within the Baranger-Mozer scheme, accounting for the electron Debye screening and the ion-ion correlations [38,48,67]. The quantities ⟨(Ḟ∥)²⟩_F and ⟨(Ḟ⊥)²⟩_F characterize the statistical properties of the plasma microfield and play a key role within the idea of thermal corrections to Stark profiles [7,8,9,12,24,25]. The terms in square brackets in Equations (4) and (6) that are proportional to the mean square of the radiator velocity describe the fluctuations of the microfield induced by the relative thermal motion of the radiating atoms. The other terms proportional to the mean square of the radiator velocity in Equations (4) and (6), apart from the factor p, are due to the effects of ion-dynamical friction on the radiator motion. As one can see, the latter terms cannot be made proportional to the reduced mass of the ion-radiator pair. However, cases where the influence of the ion-dynamical friction on Stark profiles is significant have not been revealed up to now, since the corresponding deviations of the profiles turn out to be small. Indeed, full-scale Molecular Dynamics (MD) simulations confirmed that the effects of ion dynamics can be well described by the so-called "reduced mass" (RM) model, where the motion of the radiator is neglected, for moderately coupled plasmas with the ion coupling parameter Γi ≤ 1 [34,61,75]. This greatly facilitates the study of ion dynamics in simulations, since a consistent treatment of radiator motion effects in MD is quite time-consuming.
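For reference, the Holtsmark function H(β) entering Equations (4) and (6) is straightforward to evaluate numerically. A small sketch using the standard integral representation H(β) = (2β/π) ∫₀^∞ x sin(βx) exp(−x^(3/2)) dx (which I assume matches the normalization used here):

```python
import numpy as np
from scipy.integrate import quad

def holtsmark(beta):
    """Holtsmark microfield distribution:
    H(beta) = (2*beta/pi) * int_0^inf x*sin(beta*x)*exp(-x**1.5) dx."""
    # quad's oscillatory 'sin' weight handles the slowly decaying integrand
    val, _ = quad(lambda x: x * np.exp(-x**1.5), 0.0, np.inf,
                  weight='sin', wvar=beta)
    return 2.0 * beta / np.pi * val

for b in (0.5, 1.0, 1.6, 4.0, 8.0):
    print(f"H({b}) = {holtsmark(b):.4f}")
# The distribution peaks near beta ~ 1.6 and falls off as ~beta**(-5/2)
# in the nearest-neighbor wing, consistent with the asymptotics of Eq. (8).
```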
The expressions for the fluctuation rates, Equations (4) and (6), also show that for plasmas with a complex ion composition there could be some deviations from the RM model, caused by peculiar distributions of the ion charges. As the main precision experiments have, up to now, been conducted for simple charge distributions, the expressions in Equations (5) and (6) can be greatly simplified and the terms corresponding to the ion friction omitted. Hence the charge distribution is neglected below. By analyzing the difference between profiles formed for two different reduced masses of the plasma ions, it is seen that within the idea of thermal corrections they are proportional to the second moments of the parallel ⟨(Ḟ∥)²⟩_F and perpendicular ⟨(Ḟ⊥)²⟩_F components of the ion microfield fluctuations. The general analysis, performed in different approximations [7,8,9,12,24,25], has shown that the ion-dynamical perturbations of Stark profiles are controlled by three main mechanisms. The first mechanism is the amplitude modulation induced by the rotation of the atomic dipole along with the rotating ion microfield [12]: due to the amplitude modulation, the projections of the atomic dipole on the fixed coordinate axes change ("are modulated") while the atomic dipole rotates together with the rotating electric microfield strength vector. The second mechanism exists due to the atomic dipole inertia with respect to the microfield rotation, and results in nonadiabatic transitions between states defined in the frame with the quantization axis along the rotating field direction [12]. The third mechanism (historically the first considered) is the phase modulation related to changes in the microfield magnitude [7,8,9,12,24,25]. Only this mechanism was taken into account in the earlier works of the 1950s [7,8,9], where Stark broadening by ions was analyzed in the adiabatic approximation, i.e., only within the framework of phase modulation, or frequency Stark shift [21]. As demonstrated in the works [24,25] within the approach of thermal corrections, the amplitude modulation gives the largest contribution to the Stark contour deformation due to ion dynamics in the vicinity of the line center. The general ideas of amplitude modulation, of the non-adiabatic effects and of the usage of the electron broadening for extending the theory of thermal corrections to the line center were proposed by Gennadii V. Sholin.

Figure 1. Function M(x).

Recall that the thermal corrections are defined as the difference between the total profile, calculated accounting for the perturbations due to ion dynamics, and the Stark profile of the ST approach [24,25]. In the case of equal temperatures of the plasma ions, radiators and electrons, this gives for Ly-alpha (compare with [24,25]), in the approximation of isolated individual Stark components (i.e., neglecting the non-diagonal elements of the electron impact broadening operator), the thermal correction of Equation (9). The f1−α(x) function describes the central-component contribution to the Stark profile due to the amplitude modulation, while the f2−α(x) function describes the contribution of the lateral components, related to the combined action of the amplitude modulation and the non-adiabatic effects.
The explicit expressions for f1−α(x) and f2−α(x) (Γ(z) being the gamma function) are given by Equation (10). In Equations (9) and (10) the dimensionless function M(x) is introduced (see Figure 1), proportional to the second moment of the microfield time-derivative component perpendicular to the microfield direction according to Equation (4), and defining in fact the mean square of the microfield rotation frequency. It is defined in such a way that M(0) = 1, while the corresponding constant in the limit of β ≪ 1 is included in the definition of the f2−α(x) function. The behavior of f1−α(x) and f2−α(x) is presented in Figure 2 and Figure 3, respectively. The parameter γ in Equations (9) and (10) is the electron impact width of the central component in the parabolic basis. As follows from the results of [24,25], the corrections due to the ion dynamics effects are negative in the center of a line with central components, corresponding to a decrease of the intensity in the line center due to the ion dynamics effects and its increase in the shoulders (the transition region between approximately the half width and the nearest line wings). As the thermal corrections have a perturbative character, the functions f1−α(x) and f2−α(x) have zero integrals. So, due to the ion dynamics effects, intensity is redistributed from the line center, increasing the total width of lines with central Stark components [24,25]. These general features are confirmed below in Section 4 using MD simulations (see also [75]), which are believed not to be limited by the applicability conditions of the perturbative approach [24,25]. It should be noted that within the approach of [24,25] an exact analytical expression for the ion-dynamical corrections to Ly-alpha, accounting for the collapse of the lateral Stark components [13,14,15], was also derived [24]. It has a rather complex structure and is not presented here, but the comparison of its functional behavior with the approximation of isolated individual Stark components f2−α(x) is shown in Figure 3.

Figure 2. Function f1−α(x).

Figure 3. Function f2−α(x), dashed line. The solid line is the behavior of f2−α(x;ε = 1), obtained in numerical calculations accounting for the collapse effect [24].

It is interesting to note several properties of the above-mentioned complex function f2−α(x;ε), which takes into account the collapse effect of the lateral Stark components (where ε is the ratio of the non-diagonal matrix element of the electron impact broadening operator to the electron impact width of the central component of Ly-alpha). Neglecting the dependence of the non-diagonal matrix element of the electron impact broadening operator on the value of the ion microfield F according to [17,18] (a dependence mainly important at large F for the transition from the overlapping to the isolated broadening regime of the Stark components, since ε(β) → 0 for β → ∞) corresponds to ε = 1. The ratio of the lateral-component electron impact width to the central-component electron impact width equals 2 for Ly-alpha in the parabolic basis [17,18]. This is reflected in the argument of M, whose value is taken at the pole of the resolvent corresponding to the lateral component. So, remembering that the central component is more intense than the lateral ones, its strong influence on the Ly-alpha Stark shape becomes obvious.
Comparing f2−α(x;ε = 0) with f2−α(x;ε = 1) at x = 0, their ratio comes out to be about 1.26 [24]. At first glance, putting ε = 0 in f2−α(x;ε) should yield the limit of isolated Stark components, but it turns out that f2−α(x;ε = 0) ≠ f2−α(x). This means that there is no commutativity in the sequence of the performed mathematical operations, since the f2−α(x;ε) function is obtained in the course of solving the secular problem and inverting the resolvent. It is seen in Figure 3 that the difference between the approximation of isolated components (f2−α(x)) and the exact solution accounting for the collapse effect (f2−α(x;ε = 1)) at x = 0 is noticeably smaller than that between f2−α(0;ε = 1) and f2−α(0;ε = 0), as their ratio f2−α(0;ε = 1)/f2−α(0) is only about ~1.14. Thus this comparison demonstrates an acceptable accuracy of the approximation of isolated individual Stark components [18,19] for calculations of the thermal corrections to the Ly-alpha Stark profile.

Figure 4. The function f2−β(x).

Similarly to Equations (9) and (10), the result for Ly-beta is given by Equation (11) [24,25], where f2−β(x), defined in Equation (12), describes the lateral-components contribution to the Stark profile, comprising the action of only the non-adiabatic effects in the case of lines without a central component [24,25], and w is the electron impact width of the Stark sublevel (002), designated by parabolic quantum numbers. The graph of f2−β(x) is presented in Figure 4. The case of Lyman-beta illustrates that the ion dynamics effect increases the intensity in the center of a line without central components and slightly decreases its width, owing to the lowered intensity in the nearest wings, which is clearly seen in Figure 4. Expressions (9) and (12) are derived assuming the concrete numerical values of the Stark shifts and dipole matrix elements, calculated in the parabolic basis for the corresponding Stark sublevels and components of the considered transitions [17,18]. The above results are obtained analytically by perturbation theory for non-Hermitian operators and with the analytical continuation of the microfield distribution function and the second moments of its time derivatives, which were shown to possess analytical properties in the upper complex half-plane (see [24,25]). As follows from the validity conditions of this approach, the main contribution to the integrals describing the amplitude modulation of lines with central components comes from detunings near the line center, where the argument of the universal functions describing the second moments of the derivatives is small. Moreover, the principal term of the expansion, corresponding to the amplitude modulation of the central component, does not depend on the microfield, which allows integrating over the microfield distribution analytically [24,25]. On the other hand, the principal terms of the expansion for the lateral components, related to the amplitude modulation and the non-adiabatic effects, are obtained analytically via integration in the upper half of the complex plane [24,25]. As follows from the asymptotic properties of these functions in the region of small values of the argument, the effective frequency of rotations is practically constant. Due to this, in [24,25] the value of the dimensionless function M(z) in Equations (9) and (11) was substituted by its value at small arguments near zero, corresponding to the line center.
Moreover, as M(z) varies very slowly on the characteristic frequency scale in the line center, its variation was neglected in the numerical results of Equations (9) and (11) in [24,25]. This allowed neglecting the differences between the values of M(z) standing beside the terms obtained from the residues at the various poles, and equating them in fact to a common constant, owing to the smallness of the argument. Then the summation of the terms of the perturbation expansion leads to simpler formulas, which are expressed through the functions f and finalized by introducing common scales for the Stark constants C and the impact widths w. Moreover, the significant simplification of the result is also due to the constancy of M(z) in the region near the line center, where its argument is small: all the derivatives of M(z) then turn out to be zero (or can be treated as higher-order terms of the expansion), and one is left only with the derivatives of the dispersion functions in the perturbation series (see [24,25]). However, from the principal point of view it is important that the functional behavior of M(z) is proportional to the fluctuation of the microfield component perpendicular to the microfield direction, Equations (4) and (10). It could be kept in the final result, which would then look more cumbersome than the expressions behind Equations (11) and (12): the result would contain the sum of contributions of each Stark component, determined by its values of the electron impact widths and Stark constants, multiplied by M(z) taken at the different arguments explained just above (see the explicit perturbation expansions presented in [24,25]). In the case of a line without a central component, the ion-dynamical corrections are positive in the line center (see Figure 4). Thus the intensity in the center increases, while it decreases in the nearest wings (see Figure 4). The results of [24,25] qualitatively confirm the experimental patterns observed in [20,22,27,31]. Also in [24,25], the difference profiles δR(Δλ), corresponding to two different values of the reduced mass, were considered and compared with the results of the experiments [20,22] for the Balmer-beta line. Within the notion of thermal corrections, this difference is given by Equation (13), in which f(x) is the relative behavior of the difference profile in the line center. It was shown in [24,25] that the relative behavior of the thermal corrections versus the wavelength detuning Δλ from the line center, spanned by the profile difference of Equation (13), describes the corresponding experimental data for the Balmer-beta line, given in [20,22], sufficiently accurately. So, according to the results and ideas of [24,25] and the discussion above, the difference profiles are proportional to the statistical characteristics of the microfield and, more precisely, to the characteristics of the microfield fluctuations related to the microfield rotations (compare with [24,25]). That is why this property could in principle be used to study microfield statistics in experiments and simulations. Soon the ideas of [12,24,25] were accepted, and the notion of the dominating effect of microfield rotation in ion dynamics became widespread [30,33].
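A toy numerical illustration of such difference profiles (my own sketch, not the authors' calculation): model the ion-dynamical central component as a unit-area Lorentzian whose width grows as the reduced mass decreases (the scaling assumed here, HWHM ~ 1/sqrt(μ), is inferred from the s-dilation runs of Section 4 below), and form the difference of two such profiles.

```python
import numpy as np

def lorentzian(x, hwhm):
    """Unit-area Lorentzian profile."""
    return hwhm / np.pi / (x**2 + hwhm**2)

x = np.linspace(-10.0, 10.0, 2001)    # detuning, arbitrary units
i0 = np.argmin(np.abs(x))             # index of the line center
w0, mu0 = 1.0, 0.5
for mu1, mu2 in [(0.5, 2.0), (0.5, 8.0)]:
    I1 = lorentzian(x, w0 * np.sqrt(mu0 / mu1))   # lighter pair: broader
    I2 = lorentzian(x, w0 * np.sqrt(mu0 / mu2))   # heavier pair: narrower
    dR = I1 - I2
    print(f"mu={mu1} vs mu={mu2}: dR(0)={dR[i0]:+.3f}, "
          f"max shoulder gain={dR.max():+.3f}")
# dR is negative at the line center and positive in the shoulders:
# ion dynamics redistributes intensity away from the center,
# consistent with the behavior described for lines with central components.
```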
Nowadays, the computer simulation technique has become a powerful tool for studying the physics of various non-stationary processes, and particularly plasma microfield ion dynamics effects. However, computer simulations are rather time-consuming and at present impractical for large-scale calculations. Thus, in parallel with the computer simulations, the development of model approaches that account for the ion dynamical effects in an approximate manner was carried on independently [32,33,34,35,36,37,39,41,46,49,53,54,56,59,60,61,62,63,64,68,69,70,71,72,73,74,75,76]. The first such model (actually predating the computer simulations) was MMM [14,15,28,29,52], later followed by various applications of the BID [37,71] and Frequency Fluctuation Model (FFM) [46,56,66] methods. Notably, none of these models explicitly accounts for the effects of microfield rotation [24,25,30,33,53].

It is necessary now to consider the formal conditions of validity of the thermal-corrections approach [7,8,9,12,24,25]. The results of reference [12] are applicable only for the lateral Stark components and quasistatic ions, when

[equation image: Atoms 02 00334 i014]

and in the spectral region of detunings Δω from the line center corresponding to the line wings,

[equation image: Atoms 02 00334 i015]

On the other hand, the spectral region of applicability of the theoretical approach in [24,25] is extended to the line center only due to the additional inclusion, besides the quasistatic ions (Equation (14)), of the electron impact effect, which allowed the central Stark components to be analyzed as well. However, the applicability criteria of [24,25] are rather complicated and depend on the spectral region under consideration. For example, in the line center, for the central Stark components the criterion of validity of [24] has the form

[equation image: Atoms 02 00334 i016]

where ρD and ρW designate the Debye and Weisskopf radii, respectively (see [21,45,58]). It is seen that condition (16) (and other criteria from [24,25]) is difficult to fulfill, which somewhat limits the practical applicability of the theory. For the line wings, the results of [24,25] reproduce the results of the earlier work [12] under the criterion for the separate Stark components

[equation image: Atoms 02 00334 i017]

where I(th)(Δω/CF0) is the thermal correction profile, which represents, within the assumptions of [18,19], a sum of contributions from the amplitude modulation, non-adiabatic effects, and phase modulation, and H(Δω/CF0) is the microfield distribution function. The results of [12,24,25] proved the numerical predominance of the amplitude-modulation and non-adiabatic contributions over the phase-modulation one in the line wings, where the perturbation approach of [12,24,25] is applicable for practically any plasma parameters. The magnitudes of the amplitude-modulation and non-adiabatic contributions are of the same order in this region of Stark profiles [12]. Moreover, these two contributions have the same sign, which is opposite to the sign of the phase-modulation correction [12]. So, the cancellation of the non-adiabatic and atom-reorientation (amplitude modulation) effects, proposed earlier by Spitzer in his very instructive papers [2,3,4], does not take place, and these effects play the dominant role in the line wings.
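For orientation regarding criterion (16): the two lengths entering it are standard quantities of the broadening literature rather than constructs specific to [24,25]. In Gaussian units the Debye radius is ρD = (T/4πNe e²)^(1/2), the distance over which plasma electrons screen a charge, while the Weisskopf radius ρW is the impact parameter at which a single perturber produces a phase shift of order unity (see, e.g., [45,58]). These definitions are quoted here only for the reader's convenience; the precise form of criterion (16) should be taken from [24,25].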
Ion Dynamics Modeling and Statistical-Dynamical Coupling The work during the preparation of SLSP workshops and along with their conduction revealed unexpected spread of results of various computational models, done for ion perturbers only (see for example [74,75,76]). In this respect, the study of directionality correlations, presented at SLSP-1 [65], inspired the authors to test whether the rotation effects really are responsible for a dominating contribution according to the predictions of [24,25]. To this end the Ly-α profile was calculated using a computer simulation (CS) method [57]. A one-component plasma (OCP) was assumed, consisting only of one type of ions. Furthermore, to avoid effects of plasma non-ideality such as the Debye screening, the ions were assumed moving along straight path trajectories. Time histories of the electric field Atoms 02 00334 i027(t), formed by the ions, were stored, to be used as an input when numerically solving the time-dependent Schrödinger equation of the hydrogen atom. It is instructive to separately analyze how changing the direction, and the magnitude of the microfield influence the line shape. Let us define “rotational” and “vibrational” microfields as Atoms 02 00334 i018 Atoms 02 00334 i019 respectively (compare with [75]), where F0 is again the Holtsmark normal field for singly charged ions [1]. The effect of varying reduced mass was modeled by enabling time dilation of the field histories. Evidently, the field that is changing slower by a factor of s, corresponds to that, formed by particles moving s times more slowly, i.e., with an s2-times larger reduced mass. We note that by reusing the field histories generated only once, any possible inaccuracy due to a finite statistical quality of the simulations, such as a deviation from the Holtsmark distribution of the field magnitudes [1], should be present in all calculations and thus, cancel out when the difference profiles are evaluated. The parameters of the base run (s = 1) were selected to correspond to protons (i.e., μ0 = 0.5) with the particle density N = 1017 cm3 and temperature T = 1 eV, while the additional runs with s = 2, 4, and 8 corresponded to μ = 2, 8, and 32, respectively. The resulting Ly-α profiles are presented in Figure 5. It is seen that the rotational microfield component has a significantly more pronounced effect on the total line shapes, while changing only the magnitude of the field while keeping its direction constant (the “vibrational” component) has only a minor effect on the shape of the lateral components. Evidently, with no change in the field direction, the central component remains the δ-function (not shown on Figure 5b). We note that the width of the central component due to the rotational microfield component increases when μ decreases. This is in a qualitative agreement with Equation (9). The Ly-α profiles calculated with the full microfields (Figure 6a) show a resembling behavior: the line HWHM, mostly determined by the central component, scales approximately inversely with s within the range of parameters assumed, while the shape of the lateral components remains mostly unchanged. We note that varying s may alternatively be interpreted as scaling the temperature according to T = T0/s2. The observed dependence is, thus, qualitatively similar to the Ly-α T-dependence inferred in an ion-dynamics study [76]. We note that the shape of the central component is practically Lorentzian. 
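The images of Equations (18) and (19) above are not reproduced in this copy. Judging from the verbal definitions (and from the analogous construction in [75]), they presumably read, up to notation,

    F_rot(t) = F0 F(t)/|F(t)|,    F_vib(t) = |F(t)| n0,

where n0 is a fixed unit vector: the "rotational" field keeps a fixed magnitude F0 while preserving the simulated direction history, and the "vibrational" field keeps a fixed direction while preserving the simulated magnitude history. Likewise, the time-dilation trick follows from the thermal-velocity scaling v ~ (T/μ)^(1/2): replacing F(t) by F(t/s) is equivalent to replacing μ by s²μ at fixed T.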
We now turn to analyzing the difference profiles defined in the spirit of Equation (13). However, the theory of thermal corrections [24,25] was derived perturbatively, with the zero-order broadening due to the electron impact effect, while in the present CS calculations no electron broadening was included. Therefore, it is the ion-dynamical broadening itself that fulfills this role, and one should expect to find a self-similar solution. We checked this by calculating the difference between the line shapes obtained with time-dilation factors s and s′ = s + δs, keeping the ratio δs/s constant, and normalizing the frequency axis of the resulting difference profiles to the line width. In other words, the ion-induced HWHM wi plays the role of the electron-impact one in Equation (9).

Figure 5. CS Ly-α profiles, broadened by an OCP, assuming N = 10^17 cm^-3 and T = 1 eV. The ion-radiator reduced mass is μ = s²μ0, where μ0 = 0.5. (a) Line shapes influenced by the rotational field component (18); (b) line shapes influenced by the vibrational field component (19).

Although it is desirable to keep δs/s as "infinitesimal" as possible, in practice too small a ratio results in rather noisy profiles due to the finite accuracy of the simulations; for this reason, δs/s = 1/4, corresponding to δμ/μ = 9/16, was used. The results are shown in Figure 6b. Indeed, the normalized difference profiles remain practically the same over the 64-fold variation of μ tested. Furthermore, the profiles in the central region are qualitatively similar to the prediction of the theory of thermal corrections [24,25] (cf. Figure 2). It appears, however, that the functional form is rather close to

[equation image: Atoms 02 00334 i020]

also shown in Figure 6b. It is easy to see that such a functional form corresponds to a difference between two Lorentzians, confirming the shape of the Ly-α central component inferred from our CS calculations.

Figure 6. (a) CS full Ly-α profiles, broadened by an OCP, assuming N = 10^17 cm^-3 and T = 1 eV. The ion-radiator reduced mass is μ = s²μ0, where μ0 = 0.5; (b) profile differences between line shapes calculated with s and s′ = s + δs = (5/4)s (i.e., δμ/μ = 9/16). The profile differences are scaled to the line-shape HWHM wi.
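The statement that the fitted functional form corresponds to a difference of two Lorentzians is easy to check numerically. The following minimal sketch (our illustration, not the simulation code of [57]) subtracts two unit-area Lorentzians whose HWHMs differ by the factor 5/4 used above, reproducing the center-positive, near-wing-negative pattern of Figure 6b:

    import numpy as np

    def lorentzian(x, w):
        # unit-area Lorentzian of HWHM w
        return (w / np.pi) / (x**2 + w**2)

    x = np.linspace(-10.0, 10.0, 2001)  # detuning in units of the HWHM
    diff = lorentzian(x, 1.0) - lorentzian(x, 1.25)  # s' = (5/4)s analogue

    # positive at the line center, negative in the nearest wings:
    print(diff[1000] > 0, diff[np.argmin(np.abs(x - 2.0))] < 0)  # True True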
5. Discussion

Let us discuss the separation of the rotational and vibrational (phase modulation) effects of ion dynamics (used also in [75]). For the rotational field it is assumed that the mean magnitude of the field is equal to the normal Holtsmark field value. On the other hand, the solution for a fixed angular velocity and a fixed magnitude of the electric field for hydrogen is known exactly [10,13]. Furthermore, the solution of the Schrödinger equation for this problem, and hence the profile, strongly depends on the microfield value, which is set equal to F0. These profiles are characterized by well-distinguished properties that are consequences of the atom dynamics in the rotating microfield [10,13,75]. When the profile patterns are averaged over all microfield directional histories, the fixed value of the microfield leads to a statistical-dynamical coupling through the specifics of the solutions of the Schrödinger equation, i.e., a coupling between the microfield statistics and the specific dynamics of the atomic system [10,13]. This is illustrated by the instructive detailed patterns presented in [75] for the rotational contribution of the ion dynamics effects. Let us now consider the proposed separation of the phase-modulation, or vibrational, effects. Here only the microfield orientation is assumed to be constant, while its magnitude as a function of time is preserved. The solution of the Schrödinger equation in this case reflects the specifics of a fixed orientation, and after averaging over microfield histories this imposes very characteristic features on the Stark profiles [7,8,75]. These profiles are also subject to a statistical-dynamical coupling, caused here only by the fixed microfield orientation. The present CS results show that the convolution of the separated rotational and vibrational profiles does not equal the one obtained by using the full microfield histories. This is due to the constraints involved in the separation of the rotational and vibrational effects, and is clearly a consequence of the statistical-dynamical coupling. In fact, it has the same origin as the coupling between the fixed microfield values and the means of their derivatives [5,6], discussed at the beginning of Section 3.

The analysis performed in the previous section has confirmed that the central part of a hydrogen line with a central component is formed predominantly by broadening under the action of the amplitude modulation. While in the quasistatic region of Stark broadening the natural scaling is defined by CF0 ~ n² Ni^(2/3), and in the impact regime it is proportional to n⁴ (Ti/µ)^(-1/2) Ni, dimensional considerations imply that in the regime of Stark broadening controlled by the ion dynamics the characteristic scale wi should be proportional to (Ti/µ)^(1/2) Ni^(1/3), as there can be no dependence on the atomic dipole moment or on the microfield. In the right-hand panel of Figure 6, the difference profiles for the same set of artificial values of the reduced mass are plotted. Their qualitative behavior is similar to that of the functions f1−α(x) and f2−α(x) discussed in Section 2. From the analysis performed in the previous section it can be deduced that the characteristic HWHM scale is proportional to (2Ti/µ)^(1/2), and at the same time the analysis given in [67] has shown that in this range of parameters (Ne ~ 10^17 cm^-3, T ~ 1 eV) the HWHM is proportional to Ni^(1/3). Combining these two dependences leads to the conclusion that, for the chosen plasma parameters, the HWHM has to be proportional to the typical ion microfield frequency: wi ~ (2Ti/µ)^(1/2) Ni^(1/3). These properties, discovered in the course of the CS, can in fact be understood in a quite simple manner. Indeed, the main contribution to the broadening is due to the central component. The existence of the field and its orientation define those Stark sublevels that give rise to the central Stark component, but the microfield does not affect those states, and they do not depend on the microfield value since they do not possess a dipole moment. The microfield rotations change the quantization of the quantum states, which can be considered as their decay, or as setting their lifetime. Earlier, a formal model of this type, with a decay rate depending on the microfield value and based on equations like Equation (4), was suggested in [17], but it was not thoroughly studied. From this idea it follows that any interaction should lead to a finite lifetime of the system. This hypothesis is supported by the observation of a nearly Lorentzian profile of the central component in the simulations, with wi ~ vi Ni^(1/3).
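For reference, the dimensional argument can be spelled out explicitly: if wi may depend neither on the atomic dipole moment nor on the field strength, the only frequency scale available is built from the ion thermal velocity and the mean inter-ion distance,

    vi ~ (2Ti/µ)^(1/2),    ri ~ Ni^(-1/3),    so that    wi ~ vi/ri ~ (2Ti/µ)^(1/2) Ni^(1/3),

which is precisely the typical ion microfield fluctuation frequency quoted above.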
6. Conclusions

The existence of a spectral-kinetic coupling in the formation of Stark profiles is stated. It arises since the spectral profiles and the balance equations cannot, in general, be considered separately. A consistent approach should include the balance equations as well as the spectral line profiles in one system of equations for the density matrix.

MD simulations qualitatively confirm the results obtained within the notion of thermal corrections, namely, that the formation of the Stark profile center is mainly due to the microfield rotation, while the wings are affected by the phase modulation. Here it is worth mentioning that in the line wings, where the theory of thermal corrections is practically always valid, the ion dynamics contributions of the amplitude modulation and the non-adiabatic effects have the same sign and significantly exceed numerically the contribution of the phase modulation, which has the opposite sign.

The existence of the statistical-dynamical coupling between the plasma microfield statistics and the dynamics of the atomic system of radiators, applied to the averaging of the dynamic solution over samples of histories of the microfield evolution in plasma (which is a special case of time dependence in quantum mechanics induced by the environment [78]), may be the cause that prevents the convolution of the separate rotational and vibrational contributions to the Stark profile from being equal to the Stark profile obtained under the full microfield evolution.

The difference profiles, obtained by subtraction of experimental or simulated profiles corresponding to two different reduced masses of the perturber-radiator pair, could be used as a tool for studying the statistical properties of microfields.

It is pointed out that the results of MD simulations of ion dynamics could be treated by a hypothetical model of quantum-state decay caused by the changes of the quantization axes due to the microfield rotations. This may allow for explaining the success of a variety of models that consider neither the microfield rotation nor the detailed evolution of the microfields.

We hope that this work will inspire further studies of the ion dynamics effects on Stark profiles.

Acknowledgments

We wish to thank A. Calisti, S. Ferri, M.A. Gigosos, M.A. Gonzalez, C.A. Iglesias and V.S. Lisitsa for many fruitful discussions on the subject. The work of A.V.D. was partially supported by the Russian Foundation for Basic Research (project No. 13-02-00812) and by the Council of the President of the Russian Federation for Support of Young Scientists and Leading Scientific Schools (project No. NSh-3328.2014.2). The work of E.S. was partially supported by the Israel Science Foundation and the Cornell University Excellence Center. A.V.D. greatly appreciates the invitations and support of the Organizing Committee of the SLSP-1&2 workshops and the International Atomic Energy Agency that made possible his participation in these meetings.

Author Contributions

Section 1, Section 2 and Section 3 were prepared by A.V.D. The computer simulations, described in Section 4, were performed by E.S. Both authors contributed equally to the rest of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Holtsmark, J. Über die Verbreiterung von Spektrallinien. Ann. Phys. (Leipz.) 1919, 58, 577–630.
2. Spitzer, L. Stark-effect broadening of hydrogen lines. I. Single encounters. Phys. Rev. 1939, 55, 699–708.
3. Spitzer, L. Stark-effect broadening of hydrogen lines. II. Observable profiles. Phys. Rev. 1939, 56, 39–47.
4. Spitzer, L. Impact broadening of spectral lines. Phys. Rev. 1940, 58, 348–357.
5. Chandrasekhar, S.; von Neumann, J. The statistics of the gravitational field arising from random distributions of stars. I. The speed of fluctuations. Astrophys. J. 1942, 95, 489–531.
6. Chandrasekhar, S.; von Neumann, J. The statistics of the gravitational field arising from random distributions of stars. II. The speed of fluctuations; dynamical friction; spatial correlations. Astrophys. J. 1943, 97, 1–27.
7. Kogan, V.I. Broadening of Spectral Lines in Hot Plasma. In Plasma Physics and the Problem of Controlled Thermonuclear Reactions; Leontovich, M.A., Ed.; Academy of Science USSR Press: Moscow, Russia, 1958; Volume IV, pp. 259–304.
8. Kogan, V.I. Broadening of Spectral Lines in Hot Plasma. In Plasma Physics and the Problem of Controlled Thermonuclear Reactions; Leontovich, M.A., Ed.; Pergamon Press: London, UK, 1960; Volume IV, p. 305.
9. Wimmel, H.K. Statistical ion broadening in plasmas. J. Quant. Spectrosc. Radiat. Transf. 1960, 1, 1–29.
10. Wimmel, H.K. Erratum. J. Quant. Spectrosc. Radiat. Transf. 1964, 4, 497–499.
11. Ishimura, T. Stark effect of the Lyman alpha line by a rotating electric field. J. Phys. Soc. Jpn. 1967, 23, 422–429.
12. Kogan, V.I.; Selidovkin, A.D. On fluctuating microfield in system of charged particles. Beitr. Plasmaphys. 1969, 9, 199–216.
13. Sholin, G.V.; Lisitsa, V.S.; Kogan, V.I. Amplitude modulation and non-adiabaticity in the Stark broadening of hydrogen lines in a plasma. Sov. Phys. JETP 1971, 32, 758–765.
14. Lisitsa, V.S. Hydrogen atom in a rotating electric field. Opt. Spectrosc. USSR 1971, 31, 468.
15. Frisch, U.; Brissaud, A. Theory of Stark broadening—I. Soluble scalar model as a test. J. Quant. Spectrosc. Radiat. Transf. 1971, 11, 1753–1766.
16. Brissaud, A.; Frisch, U. Theory of Stark broadening—II. Exact line profile with model microfield. J. Quant. Spectrosc. Radiat. Transf. 1971, 11, 1767–1783.
17. Strekalov, M.L.; Burshtein, A.I. Collapse of shock-broadened multiplets. JETP 1972, 34, 53–58.
18. Sholin, G.V.; Demura, A.V.; Lisitsa, V.S. Theory of Stark broadening of hydrogen lines in plasma. Sov. Phys. J. Exp. Theor. Phys. 1973, 37, 1057–1065.
19. Sholin, G.V.; Demura, A.V.; Lisitsa, V.S. Electron Impact Broadening of Stark Sublevels of Hydrogen Atom in Plasmas; Preprint IAE-2332; Kurchatov Institute of Atomic Energy: Moscow, Russia, 1972; pp. 1–21.
20. Vidal, C.R.; Cooper, J.; Smith, E.W. Hydrogen Stark-broadening tables. Astrophys. J. Suppl. Ser. 1973, 25, 37–136.
21. Kelleher, D.E.; Wiese, W.L. Observation of ion motion in hydrogen Stark profiles. Phys. Rev. Lett. 1973, 31, 1431–1434.
22. Griem, H.R. Spectral Line Broadening by Plasmas; Academic Press: New York, NY, USA, 1974.
23. Wiese, W.L.; Kelleher, D.E.; Helbig, V. Variation in Balmer-line Stark profiles with atom-ion reduced mass. Phys. Rev. A 1975, 11, 1854–1864.
24. Demura, A.V.; Lisitsa, V.S.; Sholin, G.V. On the ion motion effect in Stark profiles of hydrogen lines in a plasma. In Proceedings of the XIIth ICPIG, Eindhoven, The Netherlands, 1975; p. 37.
25. Demura, A.V.; Lisitsa, V.S.; Sholin, G.V. Theory of Thermal Corrections to Stark Profiles of Hydrogen Spectral Lines; Preprint IAE-2672; Kurchatov Institute of Atomic Energy: Moscow, Russia, 1976; pp. 1–47.
26. Demura, A.V.; Lisitsa, V.S.; Sholin, G.V. Effect of reduced mass in Stark broadening of hydrogen lines. Sov. Phys. J. Exp. Theor. Phys. 1977, 46, 209–215.
27. Grützmacher, K.; Wende, B. Discrepancies between the Stark broadening theories for hydrogen and measurements of Ly-α Stark profiles in a dense equilibrium plasma. Phys. Rev. A 1977, 16, 243–246.
28. Seidel, J. Hydrogen Stark broadening by model electronic microfields. Z. Naturforsch. 1977, 32, 1195–1206.
29. Seidel, J. Effects of ion motion on hydrogen Stark profiles. Z. Naturforsch. 1977, 32, 1207–1214.
30. Voslamber, D. Effect of emitter-ion dynamics on the line core of Lyman-α. Phys. Lett. A 1977, 61, 27–29.
31. Grützmacher, K.; Wende, B. Stark broadening of the hydrogen resonance line Lβ in a dense equilibrium plasma. Phys. Rev. A 1978, 18, 2140–2149.
32. Stamm, R.; Voslamber, D. On the role of ion dynamics in the Stark broadening of hydrogen lines. J. Quant. Spectrosc. Radiat. Transf. 1979, 22, 599–609.
33. Voslamber, D.; Stamm, R. Influence of different ion dynamical effects on Lyman lines. In Spectral Line Shapes, Volume 1; Wende, B., Ed.; Walter de Gruyter & Co.: Berlin, Germany, 1981; pp. 63–72.
34. Seidel, J.; Stamm, R. Effects of radiator motion on plasma-broadened hydrogen Lyman-β. J. Quant. Spectrosc. Radiat. Transf. 1982, 27, 499–503.
35. Stamm, R.; Talin, B.; Pollock, E.L.; Iglesias, C.A. Ion-dynamics effects on the line shapes of hydrogenic emitters in plasmas. Phys. Rev. A 1986, 34, 4144–4152.
36. Calisti, A.; Stamm, R.; Talin, B. Effect of the ion microfield fluctuations on the Lyman-α fine-structure doublet of hydrogenic ions in dense plasmas. Europhys. Lett. 1987, 4, 1003–1008.
37. Boercker, D.B.; Dufty, J.W.; Iglesias, C.A. Radiative and transport properties of ions in strongly coupled plasmas. Phys. Rev. A 1987, 36, 2254.
38. Demura, A.V. Theory of Joint Distribution Functions of Ion Microfield and Its Space and Time Derivatives in Plasma with Complex Ionization Composition; Preprint IAE-4632/6; Kurchatov Institute of Atomic Energy: Moscow, Russia, 1988; pp. 1–17.
39. Calisti, A.; Stamm, R.; Talin, B. Simulation calculation of the ion-dynamic effect on overlapping neutral helium lines. Phys. Rev. A 1988, 38, 4883–4886.
40. Demura, A.V. Microfield Fluctuations in Plasma with Low Frequency Oscillations. In XIXth ICPIG Contributed Papers; Labat, J.M., Ed.; Faculty of Physics, University of Belgrade: Belgrade, 1990; Volume 2, pp. 352–353.
41. Calisti, A.; Khelfaoui, F.; Stamm, R.; Talin, B.; Lee, R.W. Model for the line shapes of complex ions in hot and dense plasmas. Phys. Rev. A 1990, 42, 5433–5440.
42. Rautian, S.G.; Shalagin, A.M. Kinetic Problems of Nonlinear Spectroscopy; North Holland: New York, NY, USA, 1991.
43. Anufrienko, A.V.; Godunov, A.L.; Demura, A.V.; Zemtsov, Y.K.; Lisitsa, V.S.; Starostin, A.N.; Taran, M.D.; Shchipakov, V.A. Nonlinear interference effects in Stark broadening of ion lines in a dense plasma. Sov. Phys. J. Exp. Theor. Phys. 1990, 71, 728–741.
44. Anufrienko, A.V.; Bulyshev, A.E.; Godunov, A.L.; Demura, A.V.; Zemtsov, Y.K.; Lisitsa, V.S.; Starostin, A.N. Nonlinear interference effects and ion dynamics in the kinetic theory of Stark broadening of the spectral lines of multicharged ions in a dense plasma. JETP 1993, 76, 219–228.
45. Sobelman, I.I.; Vainstein, L.A.; Yukov, E.A. Excitation of Atoms and Broadening of Spectral Lines; Springer: Heidelberg, Germany; New York, NY, USA, 1995.
46. Talin, B.; Calisti, A.; Godbert, L.; Stamm, R.; Lee, R.W.; Klein, L. Frequency-fluctuation model for line-shape calculations in plasma spectroscopy. Phys. Rev. A 1995, 51, 1918–1928.
47. Gigosos, M.A.; Cardenoso, V. New plasma diagnosis tables of hydrogen Stark broadening including ion dynamics. J. Phys. B 1996, 29, 4795–4838.
48. Demura, A.V. Instantaneous joint distribution of ion microfield and its time derivatives and effects of dynamical friction in plasmas. J. Exp. Theor. Phys. 1996, 83, 60–72.
49. Alexiou, S.; Calisti, A.; Gautier, P.; Klein, L.; Leboucher-Dalimier, E.; Lee, R.W.; Stamm, R.; Talin, B. Aspects of plasma spectroscopy: Recent advances. J. Quant. Spectrosc. Radiat. Transf. 1997, 58, 399–413.
50. Kosarev, I.N.; Stehle, C.; Feautrier, N.; Demura, A.V.; Lisitsa, V.S. Interference of radiating states and ion dynamics in spectral line broadening. J. Phys. B 1997, 30, 215–236.
51. Griem, H. Principles of Plasma Spectroscopy; Cambridge University Press: Cambridge, UK, 1997.
52. Stehle, C.; Hutcheon, R. Extensive tabulation of Stark broadened hydrogen line profiles. Astron. Astrophys. Suppl. Ser. 1999, 140, 93–97.
53. Barbes, A.; Gigosos, M.A.; Gonzalez, M.A. Analysis of the coupling between impact and quasistatic field mechanisms in Stark broadening. J. Quant. Spectrosc. Radiat. Transf. 2001, 68, 679–688.
54. Gigosos, M.A.; Gonzalez, M.A.; Cardenoso, V. Computer simulated Balmer-alpha, -beta and -gamma Stark line profiles for non-equilibrium plasma diagnostics. Spectrochim. Acta B 2003, 58, 1489–1504.
55. Demura, A.V.; Rosmej, F.B.; Stamm, R. Density Matrix Approach to Description of Doubly Excited States in Dense Plasmas. In Spectral Line Shapes (18th International Conference on Spectral Line Shapes, Auburn, AL, USA, 4–9 June 2006); Oks, E., Pindzola, M., Eds.; AIP Conference Proceedings Vol. 874; AIP: Melville, NY, USA, 2006; pp. 112–126.
56. Calisti, A.; Ferri, S.; Mosse, C.; Talin, B. Modélisation des profils de raie dans les plasmas: PPP—Nouvelle version. J. Phys. IV Fr. 2006, 138, 95–103.
57. Stambulchik, E.; Maron, Y. A study of ion-dynamics and correlation effects for spectral line broadening in plasma: K-shell lines. J. Quant. Spectrosc. Radiat. Transf. 2006, 99, 730–749.
58. Oks, E.A. Stark Broadening of Hydrogen and Hydrogenlike Spectral Lines in Plasmas: The Physical Insight; Alpha Science International Ltd.: Oxford, UK, 2006.
59. Calisti, A.; Ferri, S.; Mosse, C.; Talin, B.; Lisitsa, V.; Bureyeva, L.; Gigosos, M.A.; Gonzalez, M.A.; del Rio Gaztelurrutia, T.; Dufty, J.W. Slow and fast micro-field components in warm and dense hydrogen plasmas. arXiv 2007, arXiv:physics.plasm-ph/0710.2091.
60. Calisti, A.; del Rio Gaztelurrutia, T.; Talin, B. Classical molecular dynamics model for coupled two component plasma. High Energy Density Phys. 2007, 3, 52–56.
61. Ferri, S.; Calisti, A.; Mosse, C.; Talin, B.; Gigosos, M.A.; Gonzalez, M.A. Line shape modeling in warm and dense hydrogen plasma. High Energy Density Phys. 2007, 3, 81–85.
62. Stambulchik, E.; Alexiou, S.; Griem, H.; Kepple, P.C. Stark broadening of high principal quantum number hydrogen Balmer lines in low-density laboratory plasmas. Phys. Rev. A 2007, 75, 016401.
63. Calisti, A.; Ferri, S.; Talin, B. Classical molecular dynamics model for coupled two component plasma. High Energy Density Phys. 2009, 5, 307–311.
64. Godbert-Mouret, L.; Rosato, J.; Capes, H.; Marandet, Y.; Ferri, S.; Koubiti, M.; Stamm, R.; Gonzalez, M.A.; Gigosos, M.A. Zeeman-Stark line shape codes including ion dynamics. High Energy Density Phys. 2009, 5, 162–165.
65. Stambulchik, E.; Maron, Y. Plasma line broadening and computer simulations: A mini-review. High Energy Density Phys. 2010, 6, 9–14.
66. Calisti, A.; Mosse, C.; Ferri, S.; Talin, B.; Rosmej, F.; Bureyeva, L.A.; Lisitsa, V.S. Dynamic Stark broadening as the Dicke narrowing effect. Phys. Rev. E 2010, 81, 016406.
67. Demura, A.V. Physical models of plasma microfield. Int. J. Spectrosc. 2010, 671073:1–671073:42.
68. Calisti, A.; Talin, B. Classical Molecular Dynamics Model for Coupled Two-Component Plasmas—Ionization Balance and Time Considerations. Contrib. Plasma Phys. 2011, 51, 524–528.
69. Calisti, A.; Ferri, S.; Mosse, C.; Talin, B.; Gigosos, M.A.; Gonzalez, M.A. Microfields in hot dense hydrogen plasmas. High Energy Density Phys. 2011, 7, 197–202.
70. Ferri, S.; Calisti, A.; Mosse, C.; Mouret, L.; Talin, B.; Gigosos, M.A.; Gonzalez, M.A.; Lisitsa, V. Frequency-fluctuation model applied to Stark-Zeeman spectral line shapes in plasmas. Phys. Rev. E 2011, 84, 026407.
71. Mancini, R.C.; Iglesias, C.A.; Calisti, A.; Ferri, S.; Florido, R. The effect of improved satellite line shapes on the argon Heβ spectral feature. High Energy Density Phys. 2013, 9, 731–736.
72. Iglesias, C.A. Efficient algorithms for stochastic Stark-profile calculations. High Energy Density Phys. 2013, 9, 209–221.
73. Iglesias, C.A. Efficient algorithms for Stark-Zeeman spectral line shape calculations. High Energy Density Phys. 2013, 9, 737–744.
74. Stambulchik, E. Review of the 1st Spectral Line Shapes in Plasmas code comparison workshop. High Energy Density Phys. 2013, 9, 528–534.
75. Calisti, A.; Demura, A.; Gigosos, M.; Gonzalez-Herrero, D.; Iglesias, C.; Lisitsa, V.; Stambulchik, E. Influence of micro-field directionality on line shapes. Atoms 2014, 2, 259–276.
76. Ferri, S.; Calisti, A.; Mossé, C.; Rosato, J.; Talin, B.; Alexiou, S.; Gigosos, M.A.; González, M.A.; González-Herrero, D.; Lara, N.; Gomez, T.; Iglesias, C.; Lorenzen, S.; Mancini, R.C.; Stambulchik, E. Ion dynamics effect on Stark-broadened line shapes: A cross-comparison of various models. Atoms 2014, 2, 299–318.
77. Alexiou, S.; Dimitrijevic, M.; Sahal-Brechot, S.; Stambulchik, E.; Duan, B.; Gonzalez-Herrero, D.; Gigosos, M.A. The second workshop on lineshape comparison: Isolated lines. Atoms 2014, 2, 157–177.
78. Briggs, J.S.; Rost, J.M. Time dependence in quantum mechanics. Eur. Phys. J. D 2000, 10, 311–318.
Quantum Mechanics (QM) from Special Relativity (SR)

A physical derivation of Quantum Mechanics (QM) using only the assumptions of Special Relativity (SR) as a starting point... Quantum Mechanics is not only compatible with Special Relativity, QM is derivable from SR! The SRQM Interpretation of Quantum Mechanics.

The following is a derivation of Quantum Mechanics (QM) from Special Relativity (SR). It basically highlights the few extra physical assumptions necessary to generate QM given SR as the base assumption. The Axioms of QM are not required; they emerge instead as Principles of QM based on SR derivations. See also the presentation as PDF (SRQM.pdf) or as an OpenOffice presentation (SRQM.odp). There is also a more basic derivation at SRQM-RoadMap.html. Also, see the 4-Vectors & Lorentz Scalars Reference for lots more info on four-vectors (4-vectors) in general.

A lot of the texts on Quantum Mechanics that I have seen start with a few axioms, most of which are non-intuitive and don't really seem to be related to anything in classical physics. These assumptions then build up to the Schrödinger equation, at which point we have Quantum Mechanics. In the more advanced chapters, the books will then say that we need a wavefunction that obeys Special Relativity, which the Schrödinger equation does not. They then proceed by positing the Klein-Gordon and Dirac equations, saying that they are the relativistic versions of the Schrödinger equation. It is then shown that these versions of QM agree very nicely with all of the requirements of SR, and that in fact things like the spin-statistics theorem come from the union of Quantum Mechanics and Special Relativity.

But one facet of quantum theory has always intrigued me: Quantum Mechanics seems to join up very well with Special Relativity, but not with General Relativity (GR). Why? Thinking along that line led me to the following ideas: Why do the textbooks start with the QM Schrödinger equation, which is known to be non-relativistic, and then say that Klein-Gordon is the relativistic version? What if Quantum Mechanics can actually be derived from Special Relativity? If so, then one can more correctly state that the Schrödinger equation is actually the low-velocity (low-energy) limit of the Klein-Gordon equation, just as Newtonian physics is the low-velocity limit of Relativistic physics.
Can you get Quantum Mechanics without the starting point of the standard QM axioms? Can the axioms themselves actually be derived from something that makes a little more sense, something a little more connected to other known physics? So, starting with SR and its two simple axioms (Invariance of the Measurement Interval, Constancy of LightSpeed), what else do you actually need to get QM? Also, if it turned out that QM can be derived from SR, that would go some way toward explaining the difficulty of making it join up with GR. If quantum theory is derivable from a "flat" Minkowski space-time, then GR curvature effects are something above and beyond QM.

So, let us proceed from the following assumptions:
GR is essentially correct, and SR is the "flat spacetime" limiting-case of GR.
SR is "more correct" than classical mechanics, in that all classical mechanics is just the low-velocity limiting-case of SR.
QM is not a "separate" theory which just happens to hook up nicely with SR; it may be derivable from SR.
Anything posited as fundamental due to the Schrödinger equation is actually just the low-velocity approximation of the Klein-Gordon equation.

The short summary goes like this:
Start with GR.
SR is the "flat spacetime" limiting-case of GR.
SR includes the following: Invariance of Interval Measure, Minkowski Spacetime, Poincare Invariance, Description of Physics by Tensors and 4-Vectors, LightSpeed Constant (c).
Standard SR 4-Vectors include: 4-Position, 4-Velocity, 4-Momentum, 4-WaveVector, 4-Gradient.
Relations between these SR 4-Vectors are found empirically; no QM axioms are necessary - the relations turn out to be Lorentz Invariant Scalars.
These scalars include: Proper Time (τ), Particle Rest Mass (mo), Universal Action Constant (ћ), Imaginary Unit (i).
Only this, and no axioms from QM, are enough to generate:
SR/QM Plane Waves
The Schrödinger Relation P = iћ∂
Operator formalism
Unitary Evolution
Non-zero Commutation of Position/Momentum
Relativistic Harmonic Oscillation of Events
The Klein-Gordon Relativistic Wave Equation
The KG Equation implies a QM Superposition Principle because it is a *Linear* Wave PDE.
The Schrödinger Equation is actually just the low-velocity approximation of the Klein-Gordon Equation.
Once you have a Relativistic Wave Equation, you have QM. However, one can go even further...
The Casimir Invariants of the Poincare Group give Mass *and* Spin - i.e. Spin comes not from QM, but from Poincare Invariance...
Some more exotic SR 4-vectors include: 4-SpinMomentum, 4-VectorPotential, 4-CanonicalMomentum.
The 4-SpinMomentum is still a 4-Momentum, but includes the proper machinery for the particle to interact properly spin-wise with an external 4-VectorPotential.
The 4-VectorPotential is empirically related to the other 4-Vectors by a charge (q).
At this point, we now have all the stuff (mass, charge, spin, 4-position, 4-velocity) needed to describe a particle in spacetime.
Lorentz Invariance of this 4-SpinMomentum gives a Relativistic Pauli equation.
This Relativistic Pauli equation can be shown to be the source of all the usual QM equations: Dirac, Weyl, Maxwell, Pauli, Klein-Gordon, Schrödinger, etc.
And at no point do we need quantum axioms - the principles of QM emerge from this formalism.

Where do we start? Assume that Einstein's General Relativity (GR) is essentially correct. Consider the [Low Mass = {Curvature ~ 0}] limiting case.
This gives Special Relativity <=> Minkowski Spacetime, which has the following properties:
The Principle of Relativity: the requirement that the equations describing the Laws of Physics have the same form in all admissible Frames of Reference. In other words, they have the same form for all Inertial Observers.
Mathematically this is the Invariant Interval Measure: ΔR·ΔR = (cΔt)² − Δr·Δr = (cΔt)² − |Δr|² = (cΔτ)² = Invariant.
This is known as:
Lorentz Invariance (Covariance) (for rotations and boosts {A^μ' = Λ^μ'_ν A^ν})
Poincaré Invariance (Covariance) (for rotations, boosts, and translations {A^μ' = Λ^μ'_ν A^ν + ΔA^ν})
"Flat" Spacetime = Minkowski Metric: η^μν = η_μν = DiagonalMatrix[+1,−1,−1,−1], with η^αν η_νβ = δ^α_β, which uses my preferred Metric Sign Convention; see the 4-Vectors & Lorentz Scalars Reference for the reasoning behind this.
Elements of Minkowski Spacetime are Events (a time, a spatial location).
These elements are represented by 4-Vectors, which are actually Tensors {technically (1,0)-Tensors}.
4-Vector notation: A = (a0, a) = (a0, a1, a2, a3) => (at, ax, ay, az)
Tensor notation: A^μ = (a^μ) = (a^0, a^i) = (a0, a1, a2, a3) => (at, ax, ay, az)
4-Vectors can be used to describe Physical Laws.
Scalar Products of 4-Vectors give Invariant Lorentz Scalars {(0,0)-Tensors}, e.g. A·B = A^μ η_μν B^ν = A'·B'.
The Isometry group (the set of all "distance-preserving" maps) of Minkowski Spacetime is the Poincaré Group.
Poincaré Group Symmetry: a non-Abelian Lie Group with 10 Generators = {1 time translation P0 + 3 space translations Pi + 3 rotations Ji + 3 boosts Ki}.
SL(2,C) ⋉ R^(1,3) is the Symmetry group of Minkowski Spacetime, i.e. the double cover of the Poincaré Group, as this includes particle symmetries.
The Poincaré Algebra has 2 Casimir Invariants = operators that commute with all of the Poincaré Generators.
These are {P_μP^μ = (m)², W_μW^μ = −(m)² j(j+1)}, where W^μ = (−1/2)ε^μνρσ J_νρ P_σ is the Pauli-Lubanski pseudovector.
Casimir Invariant Eigenvalues = {mass m, spin j}; hence mass *and* spin are purely SR phenomena, no QM axioms required.
This Representation of the Poincaré Group is known as Wigner's Classification in Particle Physics and Representation Theory.
Speed of Light c = Invariant Lorentz Scalar = constant.
Infinitesimal Invariant Interval Measure: dR·dR = (cdτ)² = (cdt)² − dr·dr = (cdt)² − |dr|².
See also The Wightman Axioms.
[Light-cone diagram: time-like intervals (+) in the future/past interior of the cone, light-like intervals (0) on the cone surface at slopes ±c, space-like intervals (−) in the "elsewhere" exterior.]

SR 4-Vectors - Conventions and Properties
*Note* Numeric subscripts and superscripts on variables inside the vector parentheses typically represent tensor indices, not exponents.
In the following, I use the Time-0th-Positive SR metric sign convention η^μν = η_μν = DiagonalMatrix[+1,−1,−1,−1]. I use this primarily because it reduces the number of minus signs in Lorentz Scalar Magnitudes, since there seem to be more time-like physical 4-vectors than space-like. Also, this sign convention is the one matched by the QM Schrödinger Relations later on... I always choose to have the 4-Vector refer to the upper-index tensor of the same name. {e.g. A = A^μ} In addition, I like the convention of having the (c) factor in the temporal part for correct dimensional units. {e.g. 4-Position R = (ct, r)} This allows the SR 4-Vector name to match the classical 3-vector name, which is useful when considering Newtonian limiting cases.
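As a concrete illustration of these conventions (a minimal sketch of my own, not code from this site; the function names are arbitrary), the following computes the Minkowski scalar product with the Diag[+1,−1,−1,−1] metric and verifies that it is unchanged by a boost along x:

    import math

    def minkowski_dot(a, b):
        # A·B = a0*b0 - a·b, metric signature (+,-,-,-)
        return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

    def boost_x(a, beta):
        # Lorentz boost along x with beta = v/c
        g = 1.0 / math.sqrt(1.0 - beta**2)
        t, x, y, z = a
        return (g*(t - beta*x), g*(x - beta*t), y, z)

    r = (2.0, 1.0, 0.5, -0.3)  # (ct, x, y, z)
    print(minkowski_dot(r, r))                              # ~2.66
    print(minkowski_dot(boost_x(r, 0.6), boost_x(r, 0.6)))  # ~2.66 again: invariant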
I will use UPPER case bold for 4-vectors, and lower case bold for 3-vectors. All SR 4-Vectors have the following properties:
A = A^μ = (a0, a^i) = (a0, a) = (a0, a1, a2, a3) => (at, ax, ay, az) : a typical 4-vector
A_μ = (a0, a_i) = (a0, −a) = (a0, −a1, −a2, −a3) => (at, −ax, −ay, −az) : a typical 4-covector
where A_μ = η_μν A^ν and A^μ = η^μν A_ν : tensor index lowering and raising with the Minkowski Metric
A·B = A^μ η_μν B^ν = A_ν B^ν = A^μ B_μ = +a0b0 − a·b = +a0b0 − a1b1 − a2b2 − a3b3 : the Scalar Product relation, used to make Invariant Lorentz Scalars
If the scalar product is between tensors with multiple indices, then one should use tensor indices for clarity, otherwise the equation remains ambiguous. {e.g. U·F^μν = ? => U^α η_αμ F^μν = U_μ F^μν, or U^α η_αν F^μν = U_ν F^μν}
Importantly, A·B = (a0o b0o) = Ao·Bo and A·A = (a0o)² = Ao·Ao : the Lorentz Scalar Product can quite often be set to the "rest values" of the temporal components. This occurs when the 4-Vector A is Lorentz-boosted to a frame in which the spatial component is zero: A = (a0, a) ==> Ao = (a0o, 0).
[A·B > 0] --> Time-Like
[A·B = 0] --> Light-Like / Photonic / Null
[A·B < 0] --> Space-Like

The Invariant Rest Value of the Temporal Component Rule:
β = v/c = u/c
4-UnitTemporal T = γ(1, β) = U/c
4-Velocity U = γ(c, u) = cT
Generic 4-Vector A = (a0, a)
A·T = (a0, a)·γ(1, β) = γ(a0·1 − a·β) = γ(a0 − a·β); evaluating the invariant in the rest frame (γ = 1, β = 0) gives (1)(a0o − a·0) = a0o
A·T = a0o
The Lorentz Scalar product of any 4-Vector with the 4-UnitTemporal gives the Invariant Rest Value of the Temporal Component. This makes sense from a vector viewpoint - you are taking the projection of the generic vector along a unit-length vector in the time direction.
A·U = c·a0o : the Lorentz Scalar product of any 4-Vector with the 4-Velocity gives c times the Invariant Rest Value of the Temporal Component. It's the same thing, just multiplied by (c). I will call these (A·T = a0o or A·U = c·a0o) the "Invariant Rest Value of the Temporal Component Rule". This will get used extensively later on... There is an analogous relation with the 4-UnitSpatial.

The Scalar Product-Gradient-Position Relation:
4-Position R = (ct, r)
4-Gradient ∂ = (∂t/c, −∇)
Generic 4-Vector A = (a0, a), which is not a function of R
A·R = (a0, a)·(ct, r) = (a0·ct − a·r) = Θ is equivalent to ∂[Θ] = ∂[A·R] = A
{A·R = Θ} <==> {∂[Θ] = A}
Let A·R = Θ. Then ∂[Θ] = ∂[A·R] = ∂[A]·R + A·∂[R] = (0) + A_ν ∂^μ[R^ν] = A_ν η^μν = A^μ = A.
Let ∂[Θ] = A. Then A·R = (a0, a)·(ct, r) = (a0·ct − a·r) = ((∂t[Θ]/c)·ct + ∇[Θ]·r) = (∂t[Θ]·t + ∇[Θ]·r) = Θ.
f = f(t, x) ==> df = (∂t f) dt + (∂x f) dx
f = ∫df = ∫(∂t f) dt + ∫(∂x f) dx ==> (∂t f)∫dt + (∂x f)∫dx = (∂t f)·t + (∂x f)·x {if the partials are constants wrt. t and x, which was the condition of A not being a function of R}
This comes up in the SR Phase and SR Analytic Mechanics.

Basis Representation & Independence (Manifest Covariance): when the components of the 4-vector {A} are in (time scalar, space 3-vector) form {(a0, a)}, the 4-vector is in spatial-basis-invariant form. Once you specify the spatial components individually, you have picked a basis or representation. I indicate this by using {=>}. e.g.
4-Position X = (ct, x) = {space-basis-independent representation}
=> (ct, x, y, z) = {Cartesian/rectangular representation}
=> (ct, r, θ, z) = {cylindrical representation}
=> (ct, r, θ, φ) = {spherical representation}
These can all indicate the same 4-Vector, but the components of the 4-vector will vary in the different bases. Now, once you are in a space-basis-invariant form, e.g.
X = (ct, x), you can still do a Lorentz boost and still have the same 4-Vector X. It is only when using 4-Vectors directly (e.g. X·Y, X+Y) that you have full Spacetime Basis Independence. Knowing this, we try to find as many relations as possible in 4-Vector and Tensor format, as these are applicable to all observers. Since the language of SR is beautifully expressed using 4-vectors, I will use that formalism. There are quite a few different variations of 4-vectors that can correctly describe SR. I use the one that has only real (non-complex) notation throughout SR. The imaginary unit (i) is introduced only at the last step, which gives QM. As you can note from the outline, there are only a few steps necessary. By the way, SR is an excellent approximation for the majority of the currently known universe, including on the surface of Earth. It is only in regions of extreme curvature, such as near a stellar surface or black hole, that GR is required. See the 4-Vectors Reference for more reasoning on the choice of notation, and for more on four-vectors in general.

Interesting points include:
*All events, at which there may or may not be particles, massive or massless, have a 4-velocity magnitude of c, the speed of light.
*A number of particle properties are simply constants times another property.
*Wave-particle duality occurs purely within SR - e.g. Relativistic Optics, Relativistic Doppler Effect.
*Fields occur purely due to SR Potential Momentum.
*QM is generated simply by allowing the particles to have imaginary/complex components within spacetime.
*The Quantum Superposition Principle, usually assumed as axiomatic, is a consequence of the Klein-Gordon equation being a linear wave PDE.

Notation and Properties of SR 4-Vectors
g^μν = g_μν => η^μν = DiagonalMatrix[+1,−1,−1,−1] : the Minkowski Spacetime Metric; this is the "flat" spacetime of SR.
All SR 4-Vectors have the following properties:
A = A^μ = (at, ax, ay, az) = (a0, a1, a2, a3) = (a0, a) : a typical 4-vector
A_μ = (at, −ax, −ay, −az) = (a0, −a1, −a2, −a3) = (a0, −a) : a typical 4-covector; we can always get the 4-vector form with A^μ = η^μν A_ν
A·B = η_μν A^μ B^ν = A_ν B^ν = A^μ B_μ = +a0b0 − a1b1 − a2b2 − a3b3 = +a0b0 − a·b : the Scalar Product relation, used to make Invariant Lorentz Scalars
[A·B > 0] --> Time-Like
[A·B = 0] --> Light-Like / Null
[A·B < 0] --> Space-Like

Useful Quantities:
γ[v] = 1/√[1 − (v/c)²] : Lorentz Scaling Factor (gamma factor)
τ[v,t] = t/γ : Proper Time
√[1+x] ~ (1 + x/2) for |x| << 1 : math relation often used to simplify Relativistic eqns. to Newtonian eqns.

Fundamental/Universal Physical Constants (Lorentz Scalars):
c = Speed of Light
ћ = h/2π = Planck's Reduced Const, aka. Dirac's Const
mo = Particle Rest Mass (varies with particle type)
q = Particle Charge

Fundamental/Universal Physical 4-Vectors (Lorentz Vectors) (this notation always places the c-factor in the time-like part, and the name goes with the space-like part):
4-Position R = (ct, r)
4-Velocity U = γ(c, u)
4-Momentum P = (E/c, p) = γmo(c, u)
4-CurrentDensity J = (cρ, j) = γρo(c, u)
4-WaveVector K = (ω/c, k) = (ω/c, nω/vphase) = (ω/c)(1, β) = (1/cT, n/λ)
4-Gradient ∂ = (∂t/c, −∇) => (∂/∂(ct), −∂/∂x, −∂/∂y, −∂/∂z)
4-VectorPotential A = (φ/c, a)
4-PotentialMomentum Q = qA = q(φ/c, a) = (U/c, p) *includes effect of charge q*
4-TotalMomentum PT = (H/c, pT) = P + Q = P + qA
4-TotalGradient D = ∂ + (iq/ћ)A = 4-Gradient + effects of Vector Potential

Fundamental/Universal Relations:
R = R
U = dR/dτ : "4-Velocity is the derivative of 4-Position wrt. proper time"
P = moU
J = ρoU
K = P/ћ
∂ = −iK
Q = qA
D = ∂ + (iq/ћ)A : "where A is the (EM) vector potential and q is the (EM) charge"

Derived Physical Constants (Scalar Products of Lorentz Vectors give Lorentz Scalars):
R·R = (Δs)² = (ct)² − r·r = (ct)² − |r|²
U·U = (c)²
P·P = (moc)²
J·J = (ρoc)²
K·K = (moc/ћ)²
∂·∂ = (−imoc/ћ)² = −(moc/ћ)²

Now then, how do we get QM out of SR?
Start with a special relativistic spacetime for which the invariant measurement interval is given by R·R = (Δs)² = (ct)² − r·r = (ct)² − |r|². This is just a "flat" Euclidean 3-space with an extra, reversed-sign dimension, time, added to it. This interval is Lorentz Invariant. In this convention, space-like intervals are (−) negative, time-like intervals are (+) positive, and light-like intervals are (0) null. One can say that the universe is the set of all possible events in spacetime. All of the Special Relativistic notation applies to the concept of events. Events are simply points in spacetime. The measurement interval between points is an invariant. Now, let's examine the interesting events...

There exist particles (which can carry information) that move about in this spacetime. Each particle is located at an event (a time and a place): 4-Position R = (ct, r). The factor of (c) is inserted in the time part to give the correct, consistent dimension of length to this 4-vector. In fact, every SR 4-vector has this constant c-factor to give consistent dimensions. A particle is simply a self-sustaining event, or more correctly a worldline of connected events, which "carries" information forward in time. The information that a particle can "carry" includes mass, charge, any of the various hypercharges, spin, polarization, phase, frequency, energy, etc. These are the particles' properties. Let these particles be able to move around within the spacetime. The 4-Velocity of an event is given by U = dR/dτ, the total derivative of the 4-Position with respect to its Proper Time. This gives the 4-Velocity U = γ(c, u), where γ[v] = 1/√[1 − (v/c)²]. This particle, if its rest mass mo > 0, moves only in the direction of +time along its own worldline: U_worldline = (c, 0). Interestingly, all stationary (v = 0) massive particles move into the future at c, the Speed of Light. If the particle has rest mass mo = 0, it moves in a null or light-like direction. This is neither along time nor along space, but "between" them. These light-like particles, with v = c, have a 4-Velocity U_light-like = γc(1, n) with γ → ∞, where n is a unit space vector. Since this is rather undefined, we will use the 4-WaveVector, introduced later, to describe photons. A particle only has a spatial velocity u with respect to another particle or an observer. We have the relation √(U·U) = c. This says that the magnitude of the 4-velocity is c, the speed of light. This result is general, massive or massless! What all this means is that all light-like particles live on the "surface" (null-space) of the Light Cone, between time and space, while all massive particles live within the "interior" of the Light Cone.

One of the basic properties of particles is that of mass. Each particle has a rest mass mo. Rest mass is simply the mass as measured in a frame at rest with respect to an observer.
This mass, along with the velocity of a particle, gives 4-Momentum P = moU. Nature seems to indicate that one of the fundamental conservation laws is the Conservation of 4-Momentum. This comes from the idea that a system remains invariant under time or space translations in an isotropic, homogeneous universe. The sum of all particle 4-Momenta involved in a given interaction is constant; it has the same value before and after a given interaction. The 4-Momentum relation P = moU gives 4-Momentum P = (E/c, p) = moU = γmo(c, u). This gives the Einstein Mass-Energy relation E = γmoc², or E = mc² where m = (γmo). Note that for light-like particles the result of this formula is undefined, since E_light-like = (∞·0)·c², an indeterminate product. Presumably, the m = (γmo) factor must scale in some way (i.e., like a delta function) to give reasonable results. Also, there is a lot of confusion over whether m is the actual mass or not. A simple thought experiment clears this up. Imagine an atom at rest, having rest mass mo. Now imagine an observer moving past the atom at near light speed. The apparent mass of the atom to the moving observer is m = (γmo). Now imagine this observer accelerating to ever greater speeds. The atom is sitting happy and unchanging in its own rest frame. However, once the observer is going fast enough, this apparent mass m = (γmo) could be made to exceed that necessary to create a black hole. As that would be an irreversible event, the gamma factor γ must simply be a measure of the relative velocities of the two events. So, the true measure of actual mass is just the rest mass mo.

The energy of null/light-like particles can be obtained another way. It turns out that every photon (light particle) has associated with it a 4-WaveVector K = (ω/c, k), where ω is the temporal angular frequency. Through the efforts of Planck, Einstein, and de Broglie, it was discovered that K = P/ћ, i.e. (ω/c, k) = (1/ћ)(E/c, p). We should note here that h (ћ = h/2π) is an empirical constant, which can be measured with no assumptions about QM, just as c is an empirical constant which can be measured with no assumptions about SR. Planck discovered h based on statistical-mechanics/thermodynamic considerations of the black-body problem. Einstein applied Planck's idea to photons in the photoelectric effect to give E = ћω and the idea of photons as particle quanta. de Broglie realized that every particle, massive or massless, has 3-vector momentum p = ћk. Putting it all together naturally produces the 4-vector relation P = ћK, i.e. (E/c, p) = ћ(ω/c, k). Note also that the 4-Momentum (a particle-like object) is just a constant, ћ, times the 4-WaveVector (a wave-like object). This means that photons, or other massless quanta, can act like localized particles, and massive quanta can act like non-localized waves. That gives the Mass-Energy relation for all kinds of particles (E = γmoc² = ћω), and also the relation m = (γmo) = ωћ/c² = (γωo)ћ/c². Note that a massive particle would have rest frequency ωo, which would look like (γωo) to an observer, while massless particles simply have frequency ω. This leads into the wave-particle duality aspect of nature, and we haven't even gotten to QM yet! Note: "There is a duality of particle and wave even in classical mechanics, but the particle is the senior partner, and the wave aspect has no opportunity to display its unique characteristics." - Goldstein, Classical Mechanics 2nd Ed., pg 489 (the relation between geometrical optics and wave mechanics using the Hamilton-Jacobi Theory).
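A quick worked example of the rest-frequency relation (my numbers, not from the original page): for an electron, moc² ≈ 8.19×10^-14 J and ћ ≈ 1.055×10^-34 J·s, so

    ωo = moc²/ћ ≈ 7.8×10^20 rad/s,

an enormous internal frequency, several orders of magnitude above optical frequencies (~10^15 rad/s).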
I need to emphasize here that the 4-WaveVector can exist as an entirely SR object (non-QM). It can be derived in terms of periodic motion, where families of surfaces move through space as time increases, or alternately, as families of hypersurfaces in spacetime, formed by all events passed by the wave surface. The 4-WaveVector is everywhere in the direction of propagation of the wave surfaces. From this structure, one obtains relativistic/wave optics, without ever mentioning QM. I believe that there is more to the 4-WaveVector than other people have figured on (i.e., more importance to the overall phase Φ of the waves). More on that later... Also, the question always arises: What is waving? I assume that it is simply an internal property of a particle that happens to be cyclic. This would allow all particles to be "waves", or more precisely to have a cyclic period, without the need for a medium to be waving in. Also, note that the phase of the 4-WaveVector was not defined. Presumably, (2π)'s worth of 4-WaveVectors could have the same 4-vector K. However, another interpretation could be the symmetry between 4-vectors and One-Forms, where the 4-vectors consist of "arrows" and one-forms consist of "parallel lines". The length of the arrow along the lines is the dot-product operation, which results in a Lorentz scalar number. Also, it is at this step that I believe a probabilistic description is being imposed on the physics.

Spacetime Structure: Now, let's get to the really tough stuff. There is a thing called the 4-Gradient ∂ = ∂^μ = (∂t/c, −∇) = (∂t/c, −del) => (∂/∂(ct), −∂/∂x, −∂/∂y, −∂/∂z), where ∂ is the partial derivative function. It tells you about the changes/variations in the "surface" of spacetime. This 4-vector is significantly different from the others. It is a function that acts on a value, not a value itself. It also has a negative sign in the space component, for the upper tensor index, unlike the other "physical type" vectors.
∂·X = ∂/∂(ct)[ct] + ∇·x = ∂t/∂t + ∇·x = 1 + 3 = 4. This tells us the number of spacetime dimensions.
When it is applied to the 4-CurrentDensity, it leads to the Conservation of Charge equation: ∂·J = ∂/∂(ct)[cρ] + ∇·j = ∂ρ/∂t + ∇·j = 0. This says that the change in charge-density with respect to time is balanced by the divergence or spatial flow of current-density.
The same thing can be applied to particle 4-Momentum: ∂·P = ∂/∂(ct)[E/c] + ∇·p = (1/c²)∂E/∂t + ∇·p = 0, i.e., ∂E/∂t + c²∇·p = 0. This says that the change in energy with respect to time is balanced by the divergence or spatial flow of momentum. In fact, this is the 4-Vector Conservation of Momentum Law. Energy is neither created nor destroyed, only transported from place to place in the form of momentum. This is the strong, local form of conservation - the continuity equation.
Additionally, U·∂ = γ(∂/∂t + u·∇) = γ d/dt = d/dτ, showing that the derivative wrt. Proper Time is a Lorentz Scalar Invariant.
The 4-Gradient ∂ = (∂t/c, −∇) is an SR functional that gives the structure of Minkowski Spacetime. The Lorentz Scalar Product ∂·∂ = (∂t/c, −∇)·(∂t/c, −∇) = (∂t/c)² − ∇·∇ gives the d'Alembertian / wave equation operator. The d'Alembert operator is the Laplace operator of Minkowski Space. Despite being a functional, the d'Alembertian is still a Lorentz Scalar Invariant. The Green's function G[X−X'] for the d'Alembertian is defined by (∂·∂)G[X−X'] = δ⁴[X−X']. So, given all the above, we have clearly shown that ∂ is an SR 4-vector, not something from QM.
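Before continuing, here is a quick symbolic check of the 4-Gradient machinery (my sketch, using sympy, in 1+1 dimensions for brevity): applying the d'Alembertian ∂·∂ to a plane wave a·e^(−i(ωt − kx)) returns −(K·K) times the wave, which is exactly the eigenvalue relation exploited in the next step.

    import sympy as sp

    t, x, c, w, k, a = sp.symbols('t x c omega k a', positive=True)
    f = a * sp.exp(-sp.I * (w*t - k*x))  # plane wave a*exp(-i(K·X))

    # d'Alembertian in 1+1 dimensions: (1/c^2) d^2/dt^2 - d^2/dx^2
    box_f = sp.diff(f, t, 2)/c**2 - sp.diff(f, x, 2)

    print(sp.simplify(box_f / f))  # -> k**2 - omega**2/c**2, i.e. -(K·K)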
Now, let's perform some pure SR math based on our SR 4-vector knowledge:
4-Gradient ∂ = (∂t/c, -∇) ; ∂·∂ = (∂t/c)² - ∇·∇
4-Position X = (ct, x) ; X·X = (ct)² - x·x
4-Velocity U = γ(c, u) ; U·U = γ²(c² - u·u) = c²
4-Momentum P = (E/c, p) = (Eo/c²)U ; P·P = (E/c)² - p·p = (Eo/c)²
4-WaveVector K = (ω/c, k) = (ωo/c²)U ; K·K = (ω/c)² - k·k = (ωo/c)²
∂·X = (∂t/c, -∇)·(ct, x) = (∂t/c)[ct] - (-∇·x) = 1 - (-3) = 4
U·∂ = γ(c, u)·(∂t/c, -∇) = γ(∂t + u·∇) = γ(d/dt) = d/dτ
∂[X] = (∂t/c, -∇)(ct, x) = (∂t/c[ct], -∇[x]) = Diag[1, -1, -1, -1] = ημν
∂[K] = (∂t/c, -∇)(ω/c, k) = (∂t/c[ω/c], -∇[k]) = Diag[0, 0, 0, 0] = [[0]], since K is constant
K·X = (ω/c, k)·(ct, x) = (ωt - k·x) = Φ
∂[K·X] = ∂[K]·X + K·∂[X] = K = ∂[Φ]
(∂·∂)[K·X] = ((∂t/c)² - ∇·∇)(ωt - k·x) = 0
(∂·∂)[K·X] = ∂·(∂[K·X]) = ∂·K = 0
Now, let's make an SR function f: let f = ae^(b(K·X)), which is just a simple exponential function of 4-vectors. Then ∂[f] = (bK)ae^(b(K·X)) = (bK)f, and ∂·∂[f] = b²(K·K)f = (bωo/c)²f. Note that { b = -i } is an interesting choice - it leads to SR plane waves, which we observe empirically, e.g., EM plane waves... This gives: ∂[f] = (-iK)ae^(-i(K·X)) = (-iK)f. Since ∂[f] = (-iK)f for every such plane wave, we may write ∂ = -iK on this family of functions. Now comes Quantum Mechanics (QM)! Based on empirical evidence, QM (and enhancements like QED and QFT) has given the correct calculation/approximation of more phenomena than any other theory, ever. We have the following simple relation: ∂ = -iK, or K = i∂. This innocent-looking, very simple relation gives all of Standard QM. It does this in a number of ways, one of which is by providing the Schrödinger relation P = ћK = iћ∂. In component form this is (E = iћ ∂/∂t) and (p = -iћ∇). These are the standard operators used in the Schrödinger/Klein-Gordon eqns (as well as other relativistic quantum field equations), which are the basic QM description of physical phenomena. This essentially gives the Operator Formalism, Unitary Evolution, and Wave Structure axioms of QM, which govern how the state of a quantum system evolves in time. Reading ∂ = -iK piece by piece: [∂] = -iK supplies the Operator Formalism, ∂ = [-i]K supplies Unitary Evolution, and ∂ = -i[K] supplies the Wave Structure. One also finds that SR events oscillate with a rest frequency that is proportional to rest mass:
U·∂ = γ(∂/∂t + u·∇) = γ d/dt = d/dτ
d/dτ = U·∂ = U·(-iK) = (-i/ћ)U·P = (-imo/ћ)U·U = (-imoc²/ћ) = -iωo, using ћωo = moc²
d²/dτ² = -(ωo)²
Now, apply this to the 4-Position: d²X/dτ² = -(ωo)²X. This is the differential equation of a relativistic harmonic oscillator! Quantum events oscillate at their rest frequency. Likewise for the momenta: d²P/dτ² = -(ωo)²P. Next, let's look at quantum commutation relations. 4-Position X = (ct, x); 4-Gradient ∂ = (∂t/c, -∇). Then, purely from math...
==================
Let ψ be an arbitrary function.
X[ψ] = Xψ, ∂[ψ] = ∂[ψ]
X[∂[ψ]] = X∂[ψ]
∂[X[ψ]] = ∂[Xψ] = ∂[X]ψ + X∂[ψ]
∂[Xψ] - X∂[ψ] = ∂[X]ψ
Now with commutator notation: [∂, X]ψ = ∂[X]ψ
And since ψ was an arbitrary function...
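A small sympy check of the plane-wave relations above, in 1+1 dimensions for brevity (an illustrative sketch of mine, not from the text):

```python
import sympy as sp

t, x, c, w, k, a = sp.symbols('t x c omega k a', real=True, positive=True)

# SR plane wave f = a*exp(-i(K.X)) with phase K.X = w*t - k*x (1+1 dimensions).
f = a * sp.exp(-sp.I * (w*t - k*x))

# 4-Gradient components (dt/c, -dx) acting on f should equal -i*K*f = -i*(w/c, k)*f:
assert sp.simplify(sp.diff(f, t)/c - (-sp.I*w/c)*f) == 0   # temporal component
assert sp.simplify(-sp.diff(f, x) - (-sp.I*k)*f) == 0      # spatial component

# d'Alembertian: (del.del)f = b^2*(K.K)*f with b = -i, i.e. -((w/c)^2 - k^2)*f.
box_f = sp.diff(f, t, 2)/c**2 - sp.diff(f, x, 2)
assert sp.simplify(box_f + ((w/c)**2 - k**2)*f) == 0
print("plane-wave identities check out")
```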
[∂, X] = ∂[X]
∂[X] = (∂t/c, -∇)[(ct, x)] = (∂t/c, -∂x, -∂y, -∂z)[(ct, x, y, z)] = Diag[1, -1, -1, -1] = ημν = Minkowski Metric
[∂, X] = ημν = Minkowski Metric
==================
At this point, we have established, purely mathematically, that there is a non-zero commutation relation between the SR 4-Gradient and the SR 4-Position. Then, from our empirical measurements, we find that ∂ = -iK, so:
[∂, X] = ημν
[-iK, X] = ημν
-i[K, X] = ημν
[K, X] = iημν
Then, from our empirical measurements, we know that K = (1/ћ)P:
[K, X] = iημν
[(1/ћ)P, X] = iημν
(1/ћ)[P, X] = iημν
[P, X] = iћημν
[Xμ, Pν] = -iћημν, and, looking at just the spatial part (where ηij = -δij), [xi, pj] = iћδij. Hence, we have derived the standard QM commutator rather than assumed it as an axiom... Let's summarize a bit. We used the following relations (particle/location → movement/velocity → mass/momentum → wave duality → spacetime structure). With the exception of 4-Velocity being the derivative of 4-Position, all of these relations are just constants times other 4-vectors.
R = (ct, r) : particle/location
U = dR/dτ : movement/velocity
P = moU : mass/momentum
K = (1/ћ)P : wave duality
∂ = -iK : spacetime structure
By applying the Scalar Product law to these relations, we get:
U·U = c²
P·P = (moc)²
K·K = (moc/ћ)²
Let's look at that last equation. Applying ∂ = -iK to it gives ∂·∂ = (-i)²K·K = -(moc/ћ)², i.e., ∂·∂ = (∂/c∂t, -∇)·(∂/c∂t, -∇) = ∂²/c²∂t² - ∇·∇ = -(moc/ћ)², so ∂²/c²∂t² = ∇·∇ - (moc/ћ)². This is the basic, free-particle Klein-Gordon equation, the relativistic cousin of the Schrödinger equation! It is the relativistically correct quantum wave equation for spinless (spin 0) particles. We have apparently discovered QM by multiplying with the imaginary unit, (i). Essentially, it seems that allowing SR relativistic particles to move in an imaginary/complex space is what gives QM. At this point, you have the simplest relativistic quantum wave equation. The principle of quantum superposition follows from this, as this wave equation (a linear PDE) obeys the superposition principle. The quantum superposition axiom tells what are the allowable (possible) states of a given quantum system. I believe that the only other necessary postulate to really get all of standard QM is the probability interpretation of the wave function, and that is likely simply a reinterpretation of the continuity equation, ∂·J = ∂/∂(ct)[cρ] + ∇·j = ∂ρ/∂t + ∇·j = 0, where J is taken to be a "particle" current density. The Klein-Gordon equation is more general than the Schrödinger equation, but simplifies to the Schrödinger equation in the (v/c) << 1 limit. Also, extensions into EM fields (or other types of relativistic potentials) can be made using D = ∂ + (iq/ћ)A, where A is the EM vector potential and q is the EM charge, and allowing D·D = -(moc/ћ)² to be the more correct EM quantum wave equation. Now, let's back up a bit to P·P = (moc)²:
P·P - (moc)² = 0
(E/c)² - p·p - (moc)² = 0
E² - c²p·p - (moc²)² = 0
This can be factored into...
[E - cα·p - β(moc²)][E + cα·p + β(moc²)] = 0
E and p are quantum operators; α and β are matrices which must obey αiβ = -βαi, αiαj = -αjαi (for i ≠ j), and αi² = β² = I. The left-hand term can be set to 0 by itself, giving [E - cα·p - β(moc²)] = 0, which is the Dirac equation, correct for spin-1/2 particles.
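The anticommutation conditions on α and β can be checked concretely with the standard Dirac-representation matrices (a minimal numerical sketch; the explicit matrix representation is my assumption, since the text does not give one):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac representation: alpha_i = [[0, s_i], [s_i, 0]], beta = diag(I, -I)
alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])

# alpha_i^2 = beta^2 = I, {alpha_i, beta} = 0, {alpha_i, alpha_j} = 0 (i != j)
assert all(np.allclose(a @ a, np.eye(4)) for a in alphas)
assert np.allclose(beta @ beta, np.eye(4))
assert all(np.allclose(a @ beta + beta @ a, 0) for a in alphas)
assert all(np.allclose(alphas[i] @ alphas[j] + alphas[j] @ alphas[i], 0)
           for i in range(3) for j in range(3) if i != j)

# Consequence: (c*alpha.p + beta*mo*c^2)^2 = (c^2 p.p + (mo*c^2)^2) * I,
# which is exactly what makes the factorization of E^2 - c^2 p.p - (mo c^2)^2 work.
c, mo, p = 1.0, 1.0, np.array([0.3, -0.4, 0.5])
H = c * sum(pi * a for pi, a in zip(p, alphas)) + beta * mo * c**2
assert np.allclose(H @ H, (c**2 * (p @ p) + (mo * c**2)**2) * np.eye(4))
print("Dirac algebra verified")
```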
Let's back up to the 4-Momentum equation. Momentum is not just a property of individual particles, but also of fields. These fields can be described by 4-vectors as well. One such relativistically invariant field is the 4-VectorPotential A, which is itself a function of the 4-Position X. Typically, we deal with the Electromagnetic (EM) 4-VectorPotential, but it could be any kind of relativistic charge potential... 4-VectorPotential A[X] = A[(ct, x)] = (φ/c, a) = (φ[(ct, x)]/c, a[(ct, x)]), where the [(ct, x)] means it is a function of time t and position x. While a particle exists as a worldline over spacetime, the 4-VectorPotential exists over all of spacetime. The 4-VectorPotential can carry energy and momentum, and interact with particles via their charge q. One may obtain the PotentialMomentum 4-vector by multiplying by a charge q: Q = qA. The 4-TotalMomentum is then given by PT = P + Q. This includes the momentum of particle and field, and it is the locally conserved quantity. 4-TotalMomentum PT = (H/c, pT), where these are the TotalEnergy = Hamiltonian and the 3-TotalMomentum, and P = PT - Q = moU. Now, working back, we can make our dynamic 4-Momentum more general, including the effects of potentials: 4-Momentum P = (E/c, p) = (H/c - V/c, pT - pEM) = (H/c - qφ/c, pT - qa), where V = qφ is the potential energy and pEM = qa is the potential momentum. The dynamic 4-momentum of a particle thus now has a component due to the 4-VectorPotential, and reverts back to the usual definition of 4-momentum in the case of zero 4-VectorPotential. Likewise, following the same path as before:
K = P/ћ : 4-WaveVector K = (ωT/c - (q/ћ)φ/c, kT - (q/ћ)a)
∂ = -iK : 4-Gradient ∂ = (∂tT/c - (iq/ћ)φ/c, -∇T - (iq/ћ)a) = (∂t/c, -∇), where the T-subscripted pieces make up the total gradient.
Define the 4-TotalGradient D = ∂ + (iq/ћ)A. This is the concept of "Minimal Coupling". Minimal Coupling can be extended all the way to non-Abelian gauge theories and can be used to write down all the interactions of the Standard Model of elementary particle physics between spin-1/2 "matter particles" and spin-1 "force particles". Minimal Coupling applied to the Dirac eqn. leads to the Spin Magnetic Moment-External Magnetic Field coupling W = -γeS·B, where γe = qe/me is the gyromagnetic ratio. The corrections to the anomalous magnetic moment come from minimal coupling applied within QED. In addition, we can go back to the velocity formula: u = c²(p)/(E) = c²(pT - qa)/(H - qφ). Lagrangian/Hamiltonian Formalisms: The whole Lagrangian/Hamiltonian connection is given by the relativistic identity (γ - 1/γ) = (γβ²), which holds since 1 - 1/γ² = β². Now multiply by your favorite Lorentz Scalars...
In this case, for a free relativistic particle:
(γ - 1/γ)(P·U) = (γβ²)(P·U)
(γ - 1/γ)(moc²) = (γβ²)(moc²)
(γmoc² - moc²/γ) = γmoc²β²
(γmoc² - moc²/γ) = γmov²
(γmoc²) + (-moc²/γ) = γmo(u·u)
(γmoc²) + (-moc²/γ) = (p·u)
(H) + (L) = (p·u)
The Hamiltonian/Lagrangian connection falls right out. Now, include the effects of the 4-VectorPotential A = (φ/c, a) { = (φEM/c, aEM) for the EM potential }:
Momentum due to Potential: Q = qA
Total Momentum of the system: PT = Π = P + Q = P + qA = moU + qA = (H/c, pT) = (γmoc + qφ/c, γmou + qa)
A·U = γ(φ - a·u) = φo
P·U = γ(E - p·u) = Eo
PT·U = Eo + qφo = moc² + qφo
I assume the following: A = (φo/c²)U, i.e., (φ/c, a) = (φo/c²)γ(c, u) = (γφo/c, (γφo/c²)u), giving φ = γφo and a = (γφo/c²)u. This is analogous to P = (Eo/c²)U.
(γ - 1/γ)(PT·U) = (γβ²)(PT·U)
γ(PT·U) + -(PT·U)/γ = (γβ²)(PT·U)
γ(PT·U) + -(PT·U)/γ = (pT·u)
(H) + (L) = (pT·u)
L = -(PT·U)/γ = -moc²/γ - qφ + qa·u
H = γ(PT·U) = γmoc² + qφ = γmoc² + qγφo = γ(moc² + qφo)
H + L = pT·u
L = -(PT·U)/γ
L = -((P + Q)·U)/γ
L = -(P·U + Q·U)/γ
L = -P·U/γ - Q·U/γ
L = -moU·U/γ - qA·U/γ
L = -moc²/γ - q(φ/c, a)·γ(c, u)/γ
L = -moc²/γ - q(φ/c, a)·(c, u)
L = -moc²/γ - q(φ - a·u)
L = -moc²/γ - qφ + qa·u
L = -moc²/γ - qφo/γ
L = -(moc² + qφo)/γ
H = γ(PT·U)
H = γ((P + Q)·U)
H = γ(P·U + Q·U)
H = γP·U + γQ·U
H = γmoU·U + γqA·U
H = γmoc² + qγφo
H = γmoc² + qφ, assuming A = (φo/c²)U
H = (γβ² + 1/γ)moc² + qφ
H = (γmoβ²c² + moc²/γ) + qφ
H = (γmov² + moc²/γ) + qφ
H = p·u + moc²/γ + qφ
H = E + qφ
H = ±c√[mo²c² + p²] + qφ
H = ±c√[mo²c² + (pT - qa)²] + qφ
H + L = γ(PT·U) - (PT·U)/γ = (γ - 1/γ)(PT·U) = (γβ²)(PT·U) = (γβ²)(moc² + qφo) = (γmoβ²c² + qγφoβ²) = (γmo(u·u) + qγφo(u·u)/c²) = (γmou·u + qa·u) = (p·u + qa·u) = pT·u, assuming A = (φo/c²)U
Let's now show that the Schrödinger equation is just the low-energy limit of the Klein-Gordon equation. We now let the Klein-Gordon equation use the Total Gradient, so that our wave equation includes the EM potentials:
D·D = -(moc/ћ)²
(∂ + (iq/ћ)A)·(∂ + (iq/ћ)A) + (moc/ћ)² = 0
Let A' = (iq/ћ)A and let M = (moc/ћ). Then:
(∂ + A')·(∂ + A') + M² = 0
∂·∂ + (∂·A') + 2(A'·∂) + A'·A' + M² = 0
Now, the trick is that factor of 2. It comes about by keeping track of tensor notation (a weakness of strict 4-vector notation): acting on a function ψ, the cross terms are ∂·(A'ψ) + A'·∂ψ = (∂·A')ψ + 2A'·∂ψ. Let the 4-VectorPotential be a conservative field, so that ∂·A = 0. Then:
(∂·∂) + 2(A'·∂) + (A'·A') + M² = 0
Expanding into temporal/spatial components...
(∂t²/c² - ∇·∇) + 2((φ'/c)(∂t/c) - a'·∇) + (φ'²/c² - a'·a') + M² = 0
Gathering like components:
(∂t²/c² + 2(φ'/c)(∂t/c) + φ'²/c²) - (∇·∇ + 2a'·∇ + a'·a') + M² = 0
(∂t² + 2φ'∂t + φ'²) - c²(∇·∇ + 2a'·∇ + a'·a') + c²M² = 0
(∂t + φ')² - c²(∇ + a')² + c²M² = 0
Multiply everything by (iћ)²:
(iћ)²(∂t + φ')² - c²(iћ)²(∇ + a')² + c²(iћ)²M² = 0
Put into suggestive form:
(iћ)²(∂t + φ')² = -c²(iћ)²M² + c²(iћ)²(∇ + a')²
(iћ)²(∂t + φ')² = i²c²(iћ)²M² + c²(iћ)²(∇ + a')²
(iћ)²(∂t + φ')² = i²c²(iћ)²M² [1 + c²(iћ)²(∇ + a')²/(i²c²(iћ)²M²)]
(iћ)²(∂t + φ')² = i²c²(iћ)²M² [1 + (∇ + a')²/(i²M²)]
Take the square root of both sides:
(iћ)(∂t + φ') = ic(iћ)M √[1 + (∇ + a')²/(i²M²)]
Use the Newtonian approximation √[1 + x] ≈ ±[1 + x/2] for x << 1:
(iћ)(∂t + φ') ≈ ic(iћ)M ±[1 + (∇ + a')²/(2i²M²)]
(iћ)(∂t + φ') ≈ ±[ic(iћ)M + ic(iћ)M(∇ + a')²/(2i²M²)]
(iћ)(∂t + φ') ≈ ±[c(i²ћ)M + cћ(∇ + a')²/(2M)]
Remember M = moc/ћ:
(iћ)(∂t + φ') ≈ ±[c(i²ћ)(moc/ћ) + cћ(∇ + a')²/(2moc/ћ)]
(iћ)(∂t + φ') ≈ ±[c(i²)(moc) + ћ²(∇ + a')²/(2mo)]
(iћ)(∂t + φ') ≈ ±[-(moc²) + ћ²(∇ + a')²/(2mo)]
Remember A'EM = (iq/ћ)AEM:
(iћ)(∂t + (iq/ћ)φ) ≈ ±[-(moc²) + ћ²(∇ + (iq/ћ)a)²/(2mo)]
(iћ)(∂t) + (iћ)(iq/ћ)(φ) ≈ ±[-(moc²) + ћ²(∇ + (iq/ћ)a)²/(2mo)]
(iћ)(∂t) + (i²)(qφ) ≈ ±[-(moc²) + ћ²(∇ + (iq/ћ)a)²/(2mo)]
(iћ)(∂t) - (qφ) ≈ ±[-(moc²) + ћ²(∇ + (iq/ћ)a)²/(2mo)]
(iћ)(∂t) ≈ (qφ) ± [-(moc²) + ћ²(∇ + (iq/ћ)a)²/(2mo)]
Take the negative root:
(iћ)(∂t) ≈ (qφ) + [(moc²) - ћ²(∇ + (iq/ћ)a)²/(2mo)]
(iћ)(∂t) ≈ (qφ) + (moc²) - ћ²(∇ + (iq/ћ)a)²/(2mo)
Call (qφ) + (moc²) = V[x]:
(iћ)(∂t) ≈ V[x] - ћ²(∇ + (iq/ћ)a)²/(2mo)
Typically the vector potential is zero in most non-relativistic settings:
(iћ)(∂t) ≈ V[x] - ћ²(∇)²/(2mo)
And there you have it: the Schrödinger equation with a potential. The assumptions for the non-relativistic equation were: a conservative field A, so that ∂·A = 0; that (∇ + a')²/(i²M²) = ћ²(∇ + a')²/(i²(moc)²) is near zero, i.e., ћ²(∇ + a')² << (moc)², a good approximation for low-energy systems; and the arbitrary choice of vector potential a = 0. Or keep it around for a near-Pauli equation (we would just have to track spins, not included in this derivation). Note that the free-particle solution ∂·∂ = -(moc/ћ)² is recovered as the limiting case AEM = 0. Again, see the 4-Vectors Reference for more on this. Now, let's examine something interesting:
∂·∂ = -(moc/ћ)² : the Klein-Gordon relativistic wave eqn.
∂ = (-i/ћ)P
∂·((-i/ћ)P) = -(moc/ћ)²
∂·(P) = -i(moc)²/ћ
∂·(P) = 0 - i(moc)²/ћ
But ∂·(P) = Re[∂·(P)] by definition, since the 4-Divergence of any 4-Vector (even a complex-valued one) must be real. So ∂·(P) = 0: the conservation of 4-Momentum (i.e., energy & momentum) for our Klein-Gordon relativistic particle. This is also the equation of continuity, which leads to the probability interpretation in the Newtonian limit. So, the following assumptions within SR (Special Relativity) lead to QM (Quantum Mechanics):
R = (ct, r) : location of an event (i.e., a particle) within spacetime
U = dR/dτ : velocity of the event is the derivative of position with respect to Proper Time
P = moU : momentum is just the rest mass of the particle times its velocity
K = P/ћ : a particle's wave vector is just the momentum divided by Planck's constant, but uncertain by a phase factor
∂ = -iK : the change in spacetime corresponds to (-i) times the wave vector, whatever that means...
D = ∂ + (iq/ћ)A : the particle with minimal-coupling interaction in a potential field
Each relation may seem simple, but there is a lot of complexity generated by each level.
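The √[1 + x] ≈ 1 + x/2 step above carries the whole non-relativistic approximation. Here is a one-line sympy check of the corresponding energy expansion (an illustrative sketch of mine, not from the text):

```python
import sympy as sp

p, mo, c = sp.symbols('p m_o c', positive=True)

# Relativistic energy E = c*sqrt(mo^2 c^2 + p^2), expanded for small p:
E = c * sp.sqrt(mo**2 * c**2 + p**2)
print(sp.series(E, p, 0, 4))
# -> m_o*c**2 + p**2/(2*m_o) + O(p**4): the rest energy plus the Newtonian
#    kinetic term, which is why the negative root above reduces to Schrodinger.
```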
It can be shown that the Klein-Gordon equation describes a non-local wave function, which "violates relativistic causality when used to describe particles localized to within more than a Compton wavelength,..." - Baym. The non-locality problem in QM is also the root of the EPR paradox. I suspect that all of these locality problems are generated by the last equation, where the factor of (i) is loaded into the works, although it could be at the wave-particle duality equation. Or perhaps we are just not interpreting the equations correctly, since we derived everything from SR, which should obey its own relativistic causality. Let's examine the last relation acting on a quantum wave ket vector |V>: ∂ = -iK gives ∂|V> = -iK|V>, which gives the time eqn. [(∂/c∂t)|V> = -i(ω/c)|V>] and the space eqn. [-∇|V> = -ik|V>]. A solution to this equation is |V> = vn e^(-iKn·R)|Vn>, where vn is a real number and |Vn> is an eigenstate (stationary state). Generally, |V> can be a superposition of eigenstates |Vn>: |V> = Sum over n of [vn e^(-iKn·R)|Vn>]. Going back to the 4-WaveVector K, I believe that this is the part of the derivation of QM from SR where the quantum probabilistic interpretation becomes necessary. Since the 4-WaveVector as given here does not define the phase relationship, there is some ambiguity or uncertainty in the description. Phase almost certainly plays some role. Again, presumably a 2π range of 4-WaveVectors could describe the same 4-Momentum vector. Once one starts taking waves to be the primary description of a system, the particle aspect gets lost, or smeared out. Once the gradient operation is added to the mix, one gets what is essentially a diffusion equation for waves, in which the particle aspect is lost. Thus, a probabilistic interpretation is needed, showing that the particle is located somewhere/somewhen in the spacetime, but can't quite be pinned down exactly. My bet is that if the phases could be found, the exact locations of particle events would arise. This remains a work in progress. Reference papers/books can be found in the 4-Vectors Reference. Quantum Mechanics is derivable from Special Relativity; see the QM from SR-Simple RoadMap.
In order to calculate the cross-section of an interaction process, the following formula is often used for first approximations: $$ \sigma = \frac {2\pi} {\hbar\,v_i} \left| M_{fi}\right|^2\varrho\left(E_f\right)\,V $$ $$ M_{fi} = \langle\psi_f|H_{int}|\psi_i\rangle $$ Very often plane waves are assumed for the final state and therefore the density of states is given by $$ \varrho\left(E_f\right) = \frac{\mathrm d n\left(E_f\right)}{\mathrm d E_f} = \frac{4\pi {p_f}^2}{\left(2\pi\hbar\right)^3}\frac V {v_f} $$ I understand the derivation of this equation in the context of the non-relativistic Schrödinger equation. But why can I continue to use this formula in the relativistic limit: $v_i, v_f \to c\,,\quad p_f\approx E_f/c$? Very often books simply use this equation with a matrix element derived from some relativistic theory, e.g. coupling factors and propagators from the Dirac equation or electroweak interaction. How is this justified? Specific concerns: • Is Fermi's golden rule still valid in the relativistic limit? • Doesn't the density of final states have to be adapted in the relativistic limit? 1 Answer (accepted) Fermi's golden rule still applies in the relativistic limit, and can be rewritten in a Lorentz-invariant fashion. Starting with the transition probability $$ W_{i\rightarrow f} = \frac{2\pi}{\hbar} |m_{if}|^2 \rho(E) \,,$$ to have $W$ Lorentz invariant we'd like both the matrix element $|m_{if}|^2$ and the density of final states $\rho(E)$ to be invariant. This can be done by shifting a few terms around. A little bit of handwaving to motivate it: The wave function $\psi$ (which is in the matrix element) has to be normalized by $\int |\psi|^2 dV = 1$, which gives us a density (of probability to encounter a particle) of $1/V$. Now, a boosted observer experiences length contraction of $1/\gamma$, which changes the density to $\gamma/V$. To obtain the correct probability again, we should re-normalize the wave function to $\psi' = \sqrt{\gamma}\,\psi $ by pulling the Lorentz factor out. So we introduce a new matrix element $$|{\cal M}_{if}|^2 = |m_{if}|^2 \prod_{i=1}^n (2 \gamma_i m_i c^2) = |m_{if}|^2 \prod_{i=1}^n (2E_i) $$ (this is for an $n$-body process; note that $2\gamma_i m_i c^2 = 2E_i$). Now the transition probability (here in differential form) becomes: $$ dW = \frac{2\pi}{\hbar} \frac{|{\cal M}_{if}|^2}{ 2E_1 \, 2E_2 \cdots} \cdot \frac{1}{(2\pi\hbar)^{3n}} \, d^3p_1 \, d^3p_2 \, \cdots \, \delta^4\!\left({p_1}^\mu + {p_2}^\mu + \ldots - {p}^\mu \right) $$ The delta function is there to ensure conservation of momentum and energy. Now we can regroup the terms, absorbing the final-state energy factors into the phase space: $$ \Rightarrow \quad dW = \frac{2\pi}{\hbar} \frac{|{\cal M}_{if}|^2}{ 2E_1 \, 2E_2 } \cdot d_\mathrm{LIPS} $$ The density of states/"phase space" $d\rho$ is replaced by a relativistic version, sometimes called the Lorentz-invariant phase space $d_\mathrm{LIPS}$, which is given by $$ d_\mathrm{LIPS} = \frac{1}{(2\pi\hbar)^{3n}} \prod_{i=1}^n \frac{d^3p_i}{ 2E_i } \,\delta^4\!\left(\sum_{i=1}^n {p_i}^\mu - {p}^\mu \right) \,. $$ The nice thing about the relativistic formula for $dW$ is that, in the case you are scattering particles off one another, it immediately shows us three important contributions: not only the matrix element and phase space, but also the flux factor $1/s$ (where $s = ({p_1}^\mu + {p_2}^\mu)^2$ is the Mandelstam variable; in case the masses are negligible, $s \approx (2E)^2$ in the centre-of-mass frame).
This flux factor is responsible for the general $1/Q^2$ falling slope when you plot the cross section against $Q = \sqrt{s}$, which comes entirely from relativistic kinematics. Hope this answers your questions. Here is a presentation (PDF) that sums it up, with an explicit proof that it is Lorentz invariant.
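As a small numerical add-on (my sketch, not from the answer above): the momentum that enters the two-body version of $d_\mathrm{LIPS}$ follows from pure kinematics via the Källén triangle function, in natural units ($\hbar = c = 1$):

```python
import math

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2*(a*b + b*c + c*a)

def p_star(sqrt_s, m1, m2):
    """Momentum of either particle in the centre-of-mass frame of a two-body
    final state with invariant mass sqrt_s."""
    s = sqrt_s**2
    lam = kallen(s, m1*m1, m2*m2)
    if lam < 0:
        raise ValueError("below threshold: sqrt(s) < m1 + m2")
    return math.sqrt(lam) / (2 * sqrt_s)

# Example: rho(770) -> pi+ pi- (masses in GeV) gives p* ~ 0.36 GeV
print(f"p* = {p_star(0.775, 0.1396, 0.1396):.4f} GeV")
```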
News Article: A Florida State University high performance computing researcher has predicted a physical effect that would help physicists and astronomers provide fresh evidence of the correctness of Einstein’s general theory of relativity. Bin Chen, who works at the university’s Research Computing Center, describes the yet-to-be-observed effect in the paper “Probing the Gravitational Faraday Rotation Using Quasar X-ray Microlensing,” published November 17, 2015, in the journal Scientific Reports. “To be able to test general relativity is of crucial importance to physicists and astronomers,” Chen said. Such testing is especially important in regions close to a black hole, according to Chen, because the current evidence for Einstein’s general relativity — light bending by the sun, for example — mainly comes from regions where the gravitational field is very weak, or regions far away from a black hole. Electromagnetism demonstrates that light is composed of oscillating electric and magnetic fields. Linearly polarized light is an electromagnetic wave whose electric and magnetic fields oscillate along fixed directions when the light travels through space. The gravitational Faraday effect, first predicted in the 1950s, theorizes that when linearly polarized light travels close to a spinning black hole, the orientation of its polarization rotates according to Einstein’s theory of general relativity. Currently, there is no practical way to detect gravitational Faraday rotation. In the paper, Chen predicts a new effect that can be used to detect the gravitational Faraday effect. His proposed observation requires monitoring the X-ray emissions from gravitationally lensed quasars. “This means that light from a cosmologically distant quasar will be deflected, or gravitationally lensed, by the intervening galaxy along the line of sight before arriving at an observer on the Earth,” said Chen of the phenomenon of gravitational lensing, which was predicted by Einstein in 1936. More than 100 gravitational lenses have been discovered so far. “Astronomers have recently found strong evidence showing that quasar X-ray emissions originate from regions very close to supermassive black holes, which are believed to reside at the center of many galaxies,” Chen said. “Gravitational Faraday rotation should leave its fingerprints on such compact regions close to a black hole. “Specifically, the observed X-ray polarization of a gravitationally microlensed quasar should vary rapidly with time if the gravitational Faraday effect indeed exists,” he said. “Therefore, monitoring the X-ray polarization of a gravitationally lensed quasar over time could verify the time dependence and the existence of the gravitational Faraday effect.” If detected, Chen’s effect — a derivative of the gravitational Faraday effect — would provide strong evidence of the correctness of Einstein’s general relativity theory in the “strong-field regime,” or an environment in close proximity to a black hole. Chen generated a simulation for the paper on the FSU Research Computing Center’s High-Performance Computing cluster — the second-largest computer cluster in Florida. News Article | April 12, 2016: This is the fifth installment in a series covering how scientists are updating popular molecular dynamics, quantum chemistry and quantum materials code to take advantage of hardware advances, such as the forthcoming Intel Xeon Phi processors.
Quantum-mechanical materials and molecular modeling research is the science of materials modeling at the nanoscale. Quantum materials research examines elementary particles using a mathematical interpretation of the structure and interactions of matter. This research has a wide range of applications, such as studying molecular systems for material assemblies, small chemical systems, and biological molecules. High performance computing (HPC) systems are required for complex quantum materials research, due to the amount of data and the computation power required for calculating mathematical formulas and generating images. Researchers use specialized software such as Quantum ESPRESSO and a variety of HPC software in conducting quantum materials research. Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves and pseudopotentials. Quantum ESPRESSO is coordinated by the Quantum ESPRESSO Foundation and has a growing world-wide user community in academic and industrial research. Its intensive use of dense mathematical routines makes it an ideal candidate for many-core architectures, such as the Intel Xeon Phi coprocessor. The Intel Parallel Computing Centers at Cineca and Lawrence Berkeley National Lab (LBNL), along with the National Energy Research Scientific Computing Center (NERSC), are at the forefront in using HPC software and modifying Quantum ESPRESSO (QE) code to take advantage of Intel Xeon processors and Intel Xeon Phi coprocessors used in quantum materials research. In addition to Quantum ESPRESSO, the teams use tools such as Intel compilers, libraries, Intel VTune and OpenMP in their work. The goal is to incorporate the changes they make to Quantum ESPRESSO into the public version of the code so that scientists can gain from the modifications they have made to improve code optimization and parallelization without requiring researchers to manually modify legacy code. One example of how Cineca used Quantum ESPRESSO to study a real device of promising scientific and technological interest is the electrical conductivity of a PDI-FCN2 molecule. This study was conducted by Cineca in collaboration with the University of Bologna and the National Research Council of Italy - Institute for Nanoscience (CNR-NANO). The object of this study is a two-terminal device based on PDI-FCN2, a molecule derived from perylene. This system is important for the study of electron transport in single-molecule devices and the further development of a new generation of organic field-effect transistors (OFETs). The simulated system is composed of two gold electrodes, each of them made of 162 gold atoms. Between the electrodes, there is a PDI-FCN2 molecule. The system is made of 390 atoms and 3852 electrons. The metallic nature of the leads also requires a fine sampling of the Brillouin Zone in the description of the electronic structure. This further increases the computational effort required to simulate this system. Figures 1 and 2 show the molecular structure and the results of the study. The quantum mechanical solution of the electronic problem for such a huge system is a big challenge, and it requires a large HPC computational infrastructure, like the one available at Cineca, and all the scaling properties of Quantum ESPRESSO. Dr.
Carlo Cavazzoni (Principal Investigator at the Cineca Intel Parallel Computing Center) states, “Based on the results obtained by this study, we will gain a deep understanding of the intimate conduction mechanisms of this type of organic device, going a step forward in the direction of utilizing the new OFET technologies that soon will replace the traditional silicon devices. Quantum ESPRESSO and new supercomputing facilities will make possible our studies and the understanding of the physics of the devices that, in the future, will be the building blocks for new photovoltaic cells, next-generation displays and molecular computers.” Cineca supercomputer: The Cineca team currently does their research using the Cineca FERMI BGQ supercomputer and an IBM NeXtScale cluster named Galileo based on Intel Xeon processors and Intel Xeon Phi coprocessors (768 Xeon Phi 7120P). Cineca's next HPC computer will be a Lenovo system named Marconi, with 2 PFlops of Intel Xeon E5 v4 processors in a first stage and 11 PFlops of next-generation Intel Xeon Phi processors before fall 2016. A third-stage system will include 4.5 PFlops of future Intel Xeon processors and integrate an Intel Omni-Path interconnect. Cineca is engaged in many R&D projects relating to HPC developments. One of the most important is MaX, a center of excellence funded by the European Community, whose ambitions are to establish an infrastructure for materials scientists and to support the development of codes toward the exascale. According to Cavazzoni, “We have always focused our work on the numerical algorithms: the Fourier transform (FFT) and the linear algebra modules. We started to rethink the overall organization of the memory hierarchy and parallelization structure. In particular, we modified the code in order to implement a sort of hierarchical tiling of data structures. In order to do so, we had to deeply modify the distribution of the data structure in Quantum ESPRESSO. The following figure shows the high-level QE hierarchy.” Cineca is tiling data structures to efficiently use the computing power of each node. They changed the fine-grain parallelism in the QE FFT module by refactoring data distribution using task groups, as shown in Figure 4. The move to a many-core model required changing QE code to make it fit the structure of a single node efficiently and splitting QE code into intra-node and inter-node processes. In their work with the new data layout, a single TILE of processes, inside a given task group, contains all the G-vectors and a subset of bands to compute full 3-D FFTs. The data tiling can be changed to best match the HPC system characteristics, to the limit (if node memory permits) of having a whole 3-D FFT performed by a single task group locally on the node. The following example shows the results of a Car-Parrinello simulation on a system of 32 water molecules. The plot shows the differences between the old implementation (blue) and the new one (red), showing a reduction of the time-to-solution. Different task-group distributions are shown in the plot. Simulations were obtained running on an Intel Xeon processor E5-2630 v3. Cavazzoni indicates, “The Intel exascale road-map allows for a smooth innovation path in the code, and a constant improvement of the performance and scalability. The availability of a large number of cores per node has made it possible to tune the different layers of parallelization.
A good tiling of the different data structures permits us to efficiently use the memory and computing power of each node, reducing the amount of communication and, thus, enhancing performance. We changed the fine-grain parallelism of QE and, in particular, the FFT module. Adopting different kinds of data distribution (task groups), we achieved a good improvement in terms of performance (Figure 5). However, there is still room for improvement, in particular for the efficiency of the OpenMP multithreading, which is now limited to 4-8 threads. This is because workloads that are too small can induce load unbalancing and then a large spinning time. Adopting OpenMP tasking strategies, we are expecting a considerable improvement of the shared-memory parallelism based on the new task-level parallelism implemented in OpenMP 4. We have already done some tests that make us think that we can remove the bottleneck displayed by synchronous thread-level parallelism.” The main focus of Lawrence Berkeley National Lab (LBNL), working with the National Energy Research Scientific Computing Center (NERSC), is to advance the open-source quantum chemistry and materials codes on multicore high-performance computing systems. They are jointly optimizing a variety of codes, including NWChem and Quantum ESPRESSO. NERSC is a national supercomputing center that serves the supercomputing mission and data needs of the U.S. Department of Energy Office of Science. NERSC is part of Lawrence Berkeley National Laboratory, adjacent to the University of California campus. NERSC is also experimenting with modifying Quantum ESPRESSO code, since it is one of the most commonly used codes on NERSC systems. According to Taylor Barnes, LBNL Hopper Fellow, “In particular, we are interested in improving the performance of hybrid Density Functional Theory (DFT) calculations within Quantum ESPRESSO. Hybrid DFT is often more accurate than other types of DFT, and can be especially important for performing simulations of systems like batteries and photovoltaic cells. Unfortunately, hybrid DFT is also much more computationally demanding and, thus, many of the calculations that we would like to perform are difficult or impossible to run on current machines.” One of the LBNL/NERSC strategies for improving the performance of hybrid calculations in Quantum ESPRESSO has been to refactor and modify the hybrid sections of the code. Barnes states, “In doing so, we have made significant changes to both the communication and parallelization strategies, leading to large improvements in the code’s strong scaling efficiency.” Another focus of the LBNL/NERSC efforts is the investigation of improved ways to handle the parallelization of the fast Fourier transforms (FFTs), which are an integral part of any calculation in Quantum ESPRESSO. “FFTs are notoriously difficult to parallelize efficiently across nodes; as a result, we are exploring strategies for distinguishing between intra-node parallelization of the FFTs using OpenMP and inter-node parallelization of other portions of the calculation using MPI. Our expectation is that these changes will be especially important on Intel Xeon Phi architectures,” indicates Barnes.
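To picture the kind of FFT decomposition being discussed, a toy numpy sketch may help (my illustration, not actual Quantum ESPRESSO code): a 3-D FFT factorizes into three stages of independent batched 1-D FFTs, and it is precisely these independent batches that distributed codes split across MPI tasks, task groups, or OpenMP threads, with data transposes between the stages.

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.standard_normal((32, 32, 32)) + 1j * rng.standard_normal((32, 32, 32))

# Each stage below is a batch of 32*32 independent 1-D FFTs along one axis.
# In a distributed code each batch is divided among tasks/threads; between
# stages the data is transposed (communicated) so the next axis becomes local.
stage1 = np.fft.fft(psi, axis=0)
stage2 = np.fft.fft(stage1, axis=1)
stage3 = np.fft.fft(stage2, axis=2)

# The staged result matches the full 3-D FFT.
assert np.allclose(stage3, np.fft.fftn(psi))
print("3-D FFT decomposed into three batched 1-D FFT stages")
```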
How HPC will aid quantum materials research in the future: Cineca, LBNL and NERSC all have a vision of how improved HPC code and Intel processors and coprocessors can improve the future of quantum materials research. The work these groups are doing to modify code to take advantage of HPC parallelization and optimization is especially important because there are not enough software engineers to adapt legacy codes. The work they are doing is being reviewed, and the optimization and parallelization modifications made by Cineca have been approved and incorporated into Release 5.3.0 of the Quantum ESPRESSO code. Both the LBNL and NERSC teams are active in the Intel Xeon Phi User's Group (IXPUG) and in the exchange of information and ideas to enhance the usability and efficiency of scientific applications running on large Intel Xeon Phi coprocessor-based high performance computing (HPC) systems. NERSC will be getting a large next-generation Intel Xeon Phi processor-based supercomputer known as Cori late in 2016. NERSC has launched the NERSC Exascale Science Applications Program, which will allow 20 projects to collaborate with NERSC, Cray and Intel by providing access to early hardware and special training and preparation sessions. Project teams, guided by NERSC, Cray and Intel, will undertake intensive efforts to adapt software to take advantage of Cori's many-core architecture and to use the resultant codes to produce path-breaking science on an architecture that may represent an approach to exascale systems. Cavazzoni states, “In the context of the MaX project, we are committed to working on different codes from the materials science community in order to get ready for the exascale challenges. One of our main targets is to contribute to the modularization of such codes in order to build domain-specific libraries usable in different codes and/or complex workflows as LEGO blocks. This high degree of modularization will also allow our team to increase the performance and the suitability for new incoming architectures. In QE, we are already performing this work, and we recently packed all the functionalities related to the FFT kernels into a specific library. We are doing similar work for the linear algebra (such as diagonalization and eigenvalue problems) kernels. Together with MaX, we are also exploring new parallel paradigms and their possible usage in QE. In particular, we are interested in the tasking strategies implemented in the OpenMP standard. The advent of the Intel Xeon Phi architecture platforms gave us a strong motivation to increase the level of exposed parallelism in QE. Working on this aspect brings us much closer to exascale scalability. The Intel Xeon Phi architecture clearly tells us that what will make the difference is the ability to make the best use of the shared memory paradigm and node resources. We need to allow the allocation of a single MPI task per socket, where the best ratio today for MPI/threads is 1/2 or 1/4, quite unlikely 1/8, and nothing above. We should improve the shared memory efficiency to have the possibility to use MPI-to-threads ratios on the order of 1 to 32 at least. And this will be valuable for any architecture, not only for the Intel Xeon Phi processor. All these enhancements will soon be tested on the upcoming Intel Xeon Phi processors that will be available this year in new supercomputers.” Other articles in this series cover the modernization of popular chemistry codes. Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.
Pascual J.M., University of León | Prieto R., University of San Carlos | Carrasco R., Ramón y Cajal University Hospital | Barrios L., Computing Center. Journal of Neurosurgery | Year: 2013. Object. Accurate diagnosis of the topographical relationships of craniopharyngiomas (CPs) involving the third ventricle and/or hypothalamus remains a challenging issue that critically influences the prediction of risks associated with their radical surgical removal. This study evaluates the diagnostic accuracy of MRI to define the precise topographical relationships between intraventricular CPs, the third ventricle, and the hypothalamus. Methods. An extensive retrospective review of well-described CPs reported in the MRI era between 1990 and 2009 yielded 875 lesions largely or wholly involving the third ventricle. Craniopharyngiomas with midsagittal and coronal preoperative and postoperative MRI studies, in addition to detailed descriptions of clinical and surgical findings, were selected from this database (n = 130). The position of the CP and the morphological distortions caused by the tumor on the sella turcica, suprasellar cistern, optic chiasm, pituitary stalk, and third ventricle floor, including the infundibulum, tuber cinereum, and mammillary bodies (MBs), were analyzed on both preoperative and postoperative MRI studies. These changes were correlated with the definitive CP topography and type of third ventricle involvement by the lesion, as confirmed surgically. Results. The mammillary body angle (MBA) is the angle formed by the intersection of a plane tangential to the base of the MBs and a plane parallel to the floor of the fourth ventricle in midsagittal MRI studies. Measurement of the MBA represented a reliable neuroradiological sign that could be used to discriminate the type of intraventricular involvement by the CP in 83% of cases in this series (n = 109). An acute MBA (< 60°) was indicative of a primary tuberal-intraventricular topography, whereas an obtuse MBA (> 90°) denoted a primary suprasellar CP position, causing either an invagination of the third ventricle (pseudointraventricular lesion) or its invasion (secondarily intraventricular lesion; p < 0.01). A multivariate model including a combination of 5 variables (the MBA, position of the hypothalamus, presence of hydrocephalus, psychiatric symptoms, and patient age) allowed an accurate definition of the CP topography preoperatively in 74%-90% of lesions, depending on the specific type of relationship between the tumor and third ventricle. Conclusions. The type of mammillary body displacement caused by CPs represents a valuable clue for ascertaining the topographical relationships between these lesions and the third ventricle on preoperative MRI studies. The MBA provides a useful sign to preoperatively differentiate a primary intraventricular CP originating at the infundibulotuberal area from a primary suprasellar CP, which either invaginated or secondarily invaded the third ventricle. © AANS, 2013. News Article: Today's installment is the third in a series covering how researchers from national laboratories and scientific research centers are updating popular molecular dynamics, quantum chemistry and quantum materials code to take advantage of hardware advances, such as the next-generation Intel Xeon Phi processors.
Georgia Institute of Technology, known as Georgia Tech, is an Intel Parallel Computing Center (Intel PCC) that focuses on modernizing the performance and functionality of software on advanced HPC systems used in scientific discovery. Georgia Tech developed a new HPC software package, called GTFock, and the SIMINT library to make quantum chemistry and materials simulations run faster on servers and supercomputers using Intel Xeon processors and Intel Xeon Phi coprocessors. These tools, which continue to be improved, provide an increase in processing speed over the best state-of-the-art quantum chemistry codes in existence. “GTFock and SIMINT allow us to perform quantum chemistry simulations faster and with less expense, which can help in solving large-scale problems from fundamental chemistry and biochemistry to pharmaceutical and materials design,” states Edmond Chow, Associate Professor of Computational Science and Engineering and Director of the Georgia Institute of Technology Intel PCC. The Intel PCC at Georgia Tech has been simulating the binding of the drug Indinavir with human immunodeficiency virus (HIV) II protease. Indinavir is a protease inhibitor that competitively binds to the active site of HIV II protease to disrupt normal function as part of HIV treatment therapy. Such systems are too large to study quantum mechanically, so only a part of the protease closest to the drug is typically simulated. The aim of the work at Georgia Tech is to quantify the discrepancy in the binding energy when such truncated models of the protease are used. To do this, simulations with increasingly large portions of the protease are performed. These are enabled by the GTFock code, developed at the Georgia Tech Intel PCC in collaboration with Intel, which has been designed to scale efficiently on large cluster computers, including Intel Many Integrated Core (MIC) architecture clusters. Calculations were performed at the Hartree-Fock level of theory. The largest simulations included residues of the protease more than 18 Angstroms away from the drug molecule. These simulations involved almost 3000 atoms and were performed on more than 1.6 million compute cores of the Tianhe-2 supercomputer (an Intel Xeon processor and Intel Xeon Phi coprocessor-based system that is currently number one on the TOP500 list). The results of this work so far show variations in binding energy that persist throughout the range up to 18 Angstroms. This suggests that at even relatively large cutoff distances, leading to very large model complexes (much larger than are typically possible with conventional codes and computing resources), the binding energy is not converged to within chemical accuracy. Further work is planned to validate these results as well as to study additional protein-ligand systems. New quantum chemistry code: GTFock. The GTFock code was developed by the Georgia Tech Intel PCC in conjunction with the Intel Parallel Computing Lab. GTFock addresses one of the main challenges of quantum chemistry, which is the ability to run more accurate simulations and simulations of larger molecules by exploiting distributed-memory processing. GTFock was designed as a new toolkit with optimized and scalable code for Hartree-Fock self-consistent field iterations and the distributed computation of the Fock matrix in quantum chemistry. The Hartree-Fock (HF) method is one of the most fundamental methods in quantum chemistry for approximately solving the electronic Schrödinger equation.
The solution of the equation, called the wavefunction, can be used to determine properties of the molecule. Georgia Tech's goals in the code design of GTFock include scalability to large numbers of nodes and the capability to simultaneously use CPUs and Intel Xeon Phi coprocessors. GTFock also includes infrastructure for performing self-consistent field (SCF) iterations to solve for the Hartree-Fock approximation, and it uses a new distributed algorithm for load balancing and reducing communication. GTFock code can be integrated into existing quantum chemistry packages and can be used for experimentation and as a benchmark for high-performance computing. The code is capable of separately computing the Coulomb and exchange matrices and, thus, can be used as a core routine in many quantum chemistry methods. As part of the Intel PCC collaborations, Georgia Tech graduate student Xing Liu and Intel researcher Sanchit Misra spent a month in China optimizing and running GTFock on Tianhe-2. During testing, the team encountered scalability problems when scaling up the code to 8100 nodes on Tianhe-2. They resolved these issues by using a better static partitioning and a better work-stealing algorithm than used in previous work. They utilized the Intel Xeon Phi coprocessors on Tianhe-2 by using a dedicated thread on each node to manage offload to the coprocessors and by using work stealing to dynamically balance the work between CPUs and coprocessors. The electron repulsion integral (ERI) calculations were also optimized for modern processors, including the Intel Xeon Phi coprocessor. The partitioning framework used in GTFock is useful for comparing existing and future partitioning techniques. The best partitioning scheme may depend on the size of the problem, the computing system used and the parallelism available. In Fock matrix construction, each thread sums into its own copy of the Fock submatrices in order to avoid contention for a single copy of the Fock matrix on a node. However, accelerators including Intel Xeon Phi coprocessors have limited memory per core, making this strategy impossible for reduction across many threads. Thus, novel solutions had to be designed. Figure 2 shows speedup results from running the GTFock code. A deficiency in quantum chemistry codes that Georgia Tech saw had to be addressed is the bottleneck of computing electron repulsion integrals. This calculation is a very computationally intensive step: there are many of these integrals to calculate, and the calculations do not run efficiently on modern processors, including the Intel Xeon processor. One of the reasons is that the existing codes do not take advantage of the single instruction, multiple data (SIMD) processing that is available on these processors. It is difficult for the algorithms to exploit SIMD operations because of their structure: the existing algorithms are recursive in multiple dimensions and require substantial amounts of intermediate data. In general, it is difficult to vectorize these calculations. Many attempts in the past involved taking existing libraries and rearranging code elements to try to optimize and speed up the calculations. The Georgia Tech team felt it was necessary to create a new library for electron repulsion integral calculations from scratch. The library they created is called SIMINT, which means Single Instruction Multiple Integral (named by SIMINT library developer Ben Pritchard).
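The core idea behind such a library, computing many integrals of the same type at once so the arithmetic maps onto SIMD lanes, can be illustrated with a toy example (my sketch, not SIMINT code): one-dimensional Gaussian overlap integrals, which have the closed form √(π/(α+β))·exp(−αβ(A−B)²/(α+β)), evaluated for a whole batch of shell pairs at once.

```python
import numpy as np

def overlap_batch(alpha, beta, A, B):
    """Closed-form 1-D Gaussian overlap integrals for a whole batch of pairs.
    Vectorized over arrays: every pair runs the same instruction stream,
    which is what lets the compiler/hardware fill SIMD lanes."""
    p = alpha + beta
    return np.sqrt(np.pi / p) * np.exp(-alpha * beta * (A - B) ** 2 / p)

rng = np.random.default_rng(1)
n = 10_000
alpha, beta = rng.uniform(0.2, 2.0, n), rng.uniform(0.2, 2.0, n)
A, B = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)

batched = overlap_batch(alpha, beta, A, B)  # one vectorized pass over the batch
scalar = np.array([overlap_batch(alpha[i], beta[i], A[i], B[i]) for i in range(n)])
assert np.allclose(batched, scalar)         # same numbers, batch-friendly layout
```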
This library applies SIMD instructions to compute multiple integrals at the same time, which is the efficient mode of operation of Intel Xeon processors as well as the Intel Xeon Phi microarchitecture (MIC), with its wide SIMD units. SIMINT is a library for calculating electron repulsion integrals. The Georgia Tech PCC team designed it to use the SIMD features of Intel Xeon processors; it is highly efficient and faster than other state-of-the-art ERI codes. The approach is to use horizontal vectorization; thus, you must compute batches of integrals of the same type together. The Georgia Tech team has posted information so that users can take a look. The team uses Intel VTune Amplifier extensively in optimizing SIMINT, because it helps tune the vectorization and cache performance. Developers know how fast the processor can go and the speed limits of the calculation from the instructions they need to perform; Intel VTune Amplifier provides a variety of statistics at the line-of-code level that help determine why the code may not be reaching the expected performance. Figure 3 shows an approximate 2x speedup over libint with a test case that has many worst-case configurations. Figure 4 shows a 3x speedup for another basis set without worst-case configurations. “SIMINT has been designed specifically to efficiently use SIMD features of Intel processors and co-processors. As a result, we’re already seeing speedups of 2x to 3x over the best existing codes,” states Edmond Chow, Associate Professor of Computational Science and Engineering and Director of the Georgia Institute of Technology Intel PCC. “GTFock has attracted the attention of other developers of quantum chemistry packages. We have already integrated GTFock into PSI4 to provide distributed memory parallel capabilities to that package. In addition, we have exchanged visits with the developers of the NWChem package to initiate integration of GTFock into NWChem (joint work with Edo Apra and Karol Kowalski, PNNL). Along with SIMINT, we hope to help quantum chemists get their simulations — and their science — done faster,” states Chow. Other articles in this series cover the modernization of popular chemistry codes. Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR. News Article | August 15, 2016: Quantum computing remains mysterious and elusive to many, but USC Viterbi School of Engineering researchers might have taken us one step closer to bringing the superpowered devices to practical reality. The Information Sciences Institute at USC Viterbi is home to the USC-Lockheed Martin Quantum Computing Center (QCC), a supercooled, magnetically shielded facility specially built to house the first commercially available quantum optimization processors – devices so advanced that there are currently only two in use outside the Canadian company D-Wave Systems, where they were built: The first one went to USC and Lockheed Martin, the second to NASA and Google. Quantum computers encode data in quantum bits, or “qubits,” which have the capability of representing the two digits of one and zero at the same time – as opposed to traditional bits, which can encode distinctly either a one or a zero.
This property, called superposition, along with the ability of quantum states to “interfere” (cancel or reinforce each other like waves in a pond) and “tunnel” through energy barriers, is what may one day allow quantum processors to ultimately perform optimization calculations much faster than is possible using traditional processors. Optimization problems can take many forms, and quantum processors have been theorized to be useful for a variety of machine learning and big data problems like stock portfolio optimization, image recognition and classification, and detecting anomalies. Yet, because of the exotic way in which quantum computers process information, they are highly sensitive to errors of different kinds. When such errors occur they can erase any quantum computational advantage — so developing methods to overcome errors is of paramount importance in the quest to demonstrate “quantum supremacy.” USC researchers Walter Vinci, Tameem Albash and Daniel Lidar put forth a scheme to minimize errors. Their solution, explained in the article “Nested Quantum Annealing Correction” published in the journal npj Quantum Information, is focused on reducing and correcting errors associated with heating, a type of error that is common and particularly detrimental in quantum optimizers. Cooling the quantum processor further is not possible, since the specialized dilution refrigerator that keeps it cool already operates at its limit, at a temperature approximately 1,000 times colder than outer space. Vinci, Albash and Lidar have developed a new method to suppress heating errors: by coupling several qubits together on a D-Wave Two quantum optimizer, without changing the hardware of the device, they make these qubits act effectively as one qubit that experiences a lower temperature. The more qubits are coupled, the lower the temperature experienced, allowing researchers to minimize the effect of heating as a source of noise or error. This nesting scheme is implementable not only on platforms such as the D-Wave processor on which it was tested, but also on other future quantum optimization devices with different hardware architectures. The researchers believe that this work is an important step in eliminating a bottleneck for scalable quantum optimization implementations. “Our work is part of a large-scale effort by the research community aimed at realizing the potential of quantum information processing, which we all hope might one day surpass its classical counterparts,” said Lidar, a USC Viterbi professor and QCC scientific director.
Singularities and Black Holes First published Mon Jun 29, 2009 A spacetime singularity is a breakdown in the geometrical structure of space and time. It is a topic of ongoing physical and philosophical research to clarify both the nature and significance of such pathologies. Because it is the fundamental geometry that is breaking down, spacetime singularities are often viewed as an end, or “edge,” of spacetime itself. However, numerous difficulties arise when one tries to make this notion more precise. Our current theory of spacetime, general relativity, not only allows for singularities, but tells us that they are unavoidable in some real-life circumstances. Thus we apparently need to understand the ontology of singularities if we are to grasp the nature of space and time in the actual universe. The possibility of singularities also carries potentially important implications for the issues of physical determinism and the scope of physical laws. Black holes are regions of spacetime from which nothing, not even light, can escape. A typical black hole is the result of the gravitational force becoming so strong that one would have to travel faster than light to escape its pull. Such black holes contain a spacetime singularity at their center; thus we cannot fully understand a black hole without also understanding the nature of singularities. However, black holes raise several additional conceptual issues. As purely gravitational entities, black holes are at the heart of many attempts to formulate a theory of quantum gravity. Although they are regions of spacetime, black holes are also thermodynamical entities, with a temperature and an entropy; however, it is far from clear what statistical physics underlies these thermodynamical facts. The evolution of black holes is also apparently in conflict with standard quantum evolution, for such evolution rules out the sort of increase in entropy that seems to be required when black holes are present. This has led to a debate over what fundamental physical principles are likely to be preserved in, or violated by, a full quantum theory of gravity. 1. Spacetime Singularities General relativity, Einstein's theory of space, time, and gravity, allows for the existence of singularities. On this nearly all agree. However, when it comes to the question of how, precisely, singularities are to be defined, there is widespread disagreement. Singularities in some way signal a breakdown of the geometry itself, but this presents an obvious difficulty in referring to a singularity as a “thing” that resides at some location in spacetime: without a well-behaved geometry, there can be no “location.” For this reason, some philosophers and physicists have suggested that we should not speak of “singularities” at all, but rather of “singular spacetimes.” In this entry, we shall generally treat these two formulations as being equivalent, but we will highlight the distinction when it becomes significant. Singularities are often conceived of metaphorically as akin to a tear in the fabric of spacetime. The most common attempts to define singularities center on one of two core ideas that this image readily suggests. [Figure: a tear in spacetime] The first is that a spacetime has a singularity just in case it contains an incomplete path, one that cannot be continued indefinitely, but draws up short, as it were, with no possibility of extension. (“Where is the path supposed to go after it runs into the tear? Where did it come from when it emerged from the tear?”)
The second is that a spacetime is singular just in case there are points "missing from it." ("Where are the spacetime points that used to be or should be where the tear is?") Another common thought, often adverted to in discussion of the two primary notions, is that singular structure, whether in the form of missing points or incomplete paths, must be related to pathological behavior of some sort on the part of the singular spacetime's curvature, that is, the fundamental deformation of spacetime that manifests itself as "the gravitational field." For example, some measure of the intensity of the curvature ("the strength of the gravitational field") may increase without bound as one traverses the incomplete path. Each of these three ideas will be considered in turn below.

There is likewise considerable disagreement over the significance of singularities. Many eminent physicists believe that general relativity's prediction of singular structure signals a serious deficiency in the theory; singularities are an indication that the description offered by general relativity is breaking down. Others believe that singularities represent an exciting new horizon for physicists to aim for and explore in cosmology, holding out the promise of physical phenomena differing so radically from any that we have yet experienced as to ensure, in our attempt to observe, quantify and understand them, a profound advance in our comprehension of the physical world.

1.1 Path Incompleteness

While there are competing definitions of spacetime singularities, the most central, and widely accepted, criterion rests on the possibility that some spacetimes contain incomplete paths. Indeed, the rival definitions (in terms of missing points or curvature pathology) still make use of the notion of path incompleteness. (The reader unfamiliar with general relativity may find it helpful to review the Hole Argument entry's Beginner's Guide to Modern Spacetime Theories, which presents a brief and accessible introduction to the concepts of a spacetime manifold, a metric, and a worldline.)

A path in spacetime is a continuous chain of events through space and time. If I snap my fingers continually, without pause, then the collection of snaps forms a path. The paths used in the most important singularity theorems represent possible trajectories of particles and observers. Such paths are known as "world-lines"; they consist of the events occupied by an object throughout its lifetime. That the paths be incomplete and inextendible means, roughly speaking, that, after a finite amount of time, a particle or observer following that path would "run out of world," as it were—it would hurtle into the tear in the fabric of spacetime and vanish. Alternatively, a particle or observer could leap out of the tear to follow such a path. While there is no logical or physical contradiction in any of this, it appears on the face of it physically suspect for an observer or a particle to be allowed to pop in or out of existence right in the middle of spacetime, so to speak—if that does not suffice for concluding that the spacetime is "singular," it is difficult to imagine what else would. At the same time, the ground-breaking work predicting the existence of such pathological paths produced no consensus on what ought to count as a necessary condition for singular structure according to this criterion, and thus no consensus on a fixed definition for it.
In this context, an incomplete path in spacetime is one that is both inextendible and of finite proper length, which means that any particle or observer traversing the path would experience only a finite interval of existence that in principle cannot be continued any longer. However, for this criterion to do the work we want it to, we'll need to limit the class of spacetimes under discussion. Specifically, we shall be concerned with spacetimes that are maximally extended (or just maximal). In effect, this condition says that one's representation of spacetime is "as big as it possibly can be"—there is, from the mathematical point of view, no way to treat the spacetime as being a proper subset of a larger, more extensive spacetime.

If there is an incomplete path in a spacetime, goes the thinking behind the requirement, then perhaps the path is incomplete only because one has not made one's model of spacetime big enough. If one were to extend the spacetime manifold maximally, then perhaps the previously incomplete path could be extended into the new portions of the larger spacetime, indicating that no physical pathology underlay the incompleteness of the path. The inadequacy would merely reside in the incomplete physical model we had been using to represent spacetime.

An example of a non-maximally extended spacetime can be easily had, along with a sense of why they intuitively seem in some way or other deficient. For the moment, imagine spacetime is only two-dimensional, and flat. Now, excise from somewhere on the plane a closed set shaped like Ingrid Bergman. Any path that had passed through one of the points in the removed set is now incomplete. In this case, the maximal extension of the resulting spacetime is obvious, and does indeed fix the problem of all such incomplete paths: re-incorporate the previously excised set. The seemingly artificial and contrived nature of such examples, along with the ease of rectifying them, seems to militate in favor of requiring spacetimes to be maximal.

Once we've established that we're interested in maximal spacetimes, the next issue is what sort of path incompleteness is relevant for singularities. Here we find a good deal of controversy. Criteria of incompleteness typically look at how some parameter naturally associated with the path (such as its proper length) grows. One generally also places further restrictions on the paths that are worth considering (for example, one rules out paths that could only be taken by particles undergoing unbounded acceleration in a finite period of time). A spacetime is said to be singular if it possesses a path such that the specified parameter associated with that path cannot increase without bound as one traverses the entirety of the maximally extended path. The idea is that the parameter at issue will serve as a marker for something like the time experienced by a particle or observer, and so, if the value of that parameter remains finite along the whole path, then we've run out of path in a finite amount of time, as it were. We've hit an "edge" or a "tear" in spacetime. For a path that is everywhere timelike (i.e., that does not involve speeds at or above that of light), it is natural to take as the parameter the proper time a particle or observer would experience along the path, that is, the time measured along the path by a natural clock, such as one based on the natural vibrational frequency of an atom.
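For readers who want the standard formula: the proper time along a timelike path with coordinates x^mu(lambda) is given by the textbook expression below (a conventional statement using the (-,+,+,+) signature; the metric g and the arbitrary path parameter lambda are standard notation, not notation taken from this entry):

\tau[\gamma] \;=\; \int_{\lambda_0}^{\lambda_1} \sqrt{\,-\,g_{\mu\nu}\,\frac{dx^{\mu}}{d\lambda}\,\frac{dx^{\nu}}{d\lambda}\,}\; d\lambda

A timelike path is then incomplete in the relevant sense when it is inextendible even though this integral remains bounded along it.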
(There are also fairly natural choices that one can make for spacelike paths (i.e., those that consist of points at a single “time”) and null paths (those followed by light signals). However, because the spacelike and null cases add yet another level of difficulty, we shall not discuss them here.) The physical interpretation of this sort of incompleteness for timelike paths is more or less straightforward: a timelike path incomplete with respect to proper time in the future direction would represent the possible trajectory of a massive body that would, say, never age beyond a certain point in its existence (an analogous statement can be made, mutatis mutandis, if the path were incomplete in the past direction). We cannot, however, simply stipulate that a maximal spacetime is singular just in case it contains paths of finite proper length that cannot be extended. Such a criterion would imply that even the flat spacetime described by special relativity is singular, which is surely unacceptable. This would follow because, even in flat spacetime, there are timelike paths with unbounded acceleration which have only a finite proper length (proper time, in this case) and are also inextendible. The most obvious option is to define a spacetime as singular if and only if it contains incomplete, inextendible timelike geodesics, i.e., paths representing the trajectories of inertial observers, those in free-fall experiencing no acceleration “other than that due to gravity.” However, this criterion seems too permissive, in that it would count as non-singular some spacetimes whose geometry seems quite pathological. For example, Geroch (1968) demonstrates that a spacetime can be geodesically complete and yet possess an incomplete timelike path of bounded total acceleration—that is to say, an inextendible path in spacetime traversable by a rocket with a finite amount of fuel, along which an observer could experience only a finite amount of proper time. Surely the intrepid astronaut in such a rocket, who would never age beyond a certain point but who also would never necessarily die or cease to exist, would have just cause to complain that something was singular about this spacetime. We therefore want a definition that is not restricted to geodesics when deciding whether a spacetime is singular. However, we need some way of overcoming the fact that non-singular spacetimes include inextendible paths of finite proper length. The most widely accepted solution to this problem makes use of a slightly different (and slightly technical) notion of length, known as “generalized affine length.”[1] Unlike proper length, this generalized affine length depends on some arbitrary choices (roughly speaking, the length will vary depending on the coordinates one chooses). However, if the length is infinite for one such choice, it will be infinite for all other choices. Thus the question of whether a path has a finite or infinite generalized affine length is a perfectly well-defined question, and that is all we'll need. The definition that has won the most widespread acceptance — leading Earman (1995, p. 36) to dub this the semiofficial definition of singularities — is the following: A maximal spacetime is singular if and only if it contains an inextendible path of finite generalized affine length. To say that a spacetime is singular then is to say that there is at least one maximally extended path that has a bounded (generalized affine) length. 
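The flat-spacetime counterexample mentioned above can be made concrete with a short worked calculation (the particular velocity profile below is one convenient illustrative choice among many). In Minkowski spacetime, a clock moving with Lorentz factor gamma(t) accumulates proper time at the rate d\tau = dt/\gamma(t); if the acceleration is allowed to grow without bound, gamma(t) can grow fast enough that the total proper time converges even though the worldline persists for all coordinate time:

\gamma(t) = 1 + t^{2} \quad\Longrightarrow\quad \tau_{\text{total}} = \int_{0}^{\infty} \frac{dt}{1+t^{2}} = \frac{\pi}{2}

Such a traveler's clock never reads past pi/2 even though flat spacetime is certainly not singular; sustaining this motion requires proper acceleration that grows without bound. The generalized affine length is designed precisely to handle such cases: on that measure the path above comes out infinitely long, so Minkowski spacetime is not counted as singular.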
To put it another way, a spacetime is nonsingular when it is complete in the sense that the only reason any given path might not be extendible is that it's already infinitely long (in this technical sense). The chief problem facing this definition of singularities is that the physical significance of generalized affine length is opaque, and thus it is unclear what the relevance of singularities, defined in this way, might be. It does nothing, for example, to clarify the physical status of the spacetime described by Geroch; it seems as though the new criterion does nothing more than sweep the troubling aspects of such examples under the rug. It does not explain why we ought not take such prima facie puzzling and troubling examples as physically pathological; it merely declares by fiat that they are not.

So where does this leave us? The consensus seems to be that, while it is easy in specific examples to conclude that incomplete paths of various sorts represent singular structure, no entirely satisfactory, strict definition of singular structure in their terms has yet been formulated. For a philosopher, the issues offer deep and rich veins for those contemplating, among other matters, the role of explanatory power in the determination of the adequacy of physical theories, the role of metaphysics and intuition, questions about the nature of the existence attributable to physical entities in spacetime and to spacetime itself, and the status of mathematical models of physical systems in the determination of our understanding of those systems as opposed to in the mere representation of our knowledge of them.

1.2 Boundary Constructions

We have seen that one runs into difficulties if one tries to define singularities as "things" that have "locations," and how some of those difficulties can be avoided by defining singular spacetimes in terms of incomplete paths. However, it would be desirable for many reasons to have a characterization of a spacetime singularity in general relativity as, in some sense or other, a spatiotemporal "place." If one had a precise characterization of a singularity in terms of points that are missing from spacetime, one might then be able to analyze the structure of the spacetime "locally at the singularity," instead of taking troublesome, perhaps ill-defined limits along incomplete paths. Many discussions of singular structure in relativistic spacetimes, therefore, are premised on the idea that a singularity represents a point or set of points that in some sense or other is "missing" from the spacetime manifold, that spacetime has a "hole" or "tear" in it that we could fill in or patch by the appendage of a boundary to it.

In trying to determine whether an ordinary web of cloth has a hole in it, for example, one would naturally rely on the fact that the web exists in space and time. In this case one can, so to speak, point to a hole in the cloth by specifying points of space at a particular moment of time not currently occupied by any of the cloth but which would, as it were, complete the cloth were they so occupied. When trying to conceive of a singular spacetime, however, one does not have the luxury of imagining it embedded in a larger space with respect to which one can say there are points missing from it. In any event, the demand that the spacetime be maximal rules out the possibility of embedding the spacetime manifold in any larger spacetime manifold of any ordinary sort.
It would seem, then, that making precise the idea that a singularity is a marker of missing points ought to devolve upon some idea of intrinsic structural incompleteness in the spacetime manifold rather than extrinsic incompleteness with respect to an external structure. Force of analogy suggests that one define a spacetime to have points missing from it if and only if it contains incomplete, inextendible paths, and then try to use these incomplete paths to construct in some fashion or other new, properly situated points for the spacetime, the addition of which will make the previously inextendible paths extendible. These constructed points would then be our candidate singularities. Missing points on this view would correspond to a boundary for a singular spacetime—actual points of an extended spacetime at which paths incomplete in the original spacetime would terminate. (We will, therefore, alternate between speaking of missing points and speaking of boundary points, with no difference of sense intended.) The goal then is to construct this extended space using the incomplete paths as one's guide.

Now, in trivial examples of spacetimes with missing points such as the one offered before, flat spacetime with a closed set in the shape of Ingrid Bergman excised from it, one does not need any technical machinery to add the missing points back in. One can do it by hand, as it were. Many spacetimes with incomplete paths, however, do not allow "missing points" to be attached in any obvious way by hand, as this example does. For this program to be viable, which is to say, in order to give substance to the idea that there really are points that in some sense ought to have been included in the spacetime in the first place, we require a physically natural completion procedure based on the incomplete paths that can be applied to incomplete paths in arbitrary spacetimes.

Several problems with this program make themselves felt immediately. Consider, for example, an instance of spacetime representing the final state of the complete gravitational collapse of a spherically symmetric body resulting in a black hole. (See §3 below for a description of black holes.) In this spacetime, any timelike path entering the black hole will necessarily be extendible for only a finite amount of proper time—it then "runs into the singularity" at the center of the black hole. In its usual presentation, however, there are no obvious points missing from the spacetime at all. It is, to all appearances, as complete as the Cartesian plane, except for the existence of incomplete curves, no class of which indicates by itself a place in the manifold to add a point to it to make the paths in the class complete. Likewise, in our own spacetime every inextendible, past-directed timelike path is incomplete (and our spacetime is singular): they all "run into the Big Bang." Insofar as there is no moment of time at which the Big Bang occurred (there is no moment of time at which time began, so to speak), there is no point to serve as the past endpoint of such a path.
The reaction to the problems faced by these boundary constructions is varied, to say the least, ranging from blithe acceptance of the pathology (Clarke 1993), to the attitude that there is no satisfying boundary construction currently available without ruling out the possibility of better ones in the future (Wald 1984), to not even mentioning the possibility of boundary constructions when discussing singular structure (Joshi 1993), to rejection of the need for such constructions at all (Geroch, Can-bin and Wald, 1982). Nonetheless, many eminent physicists seem convinced that general relativity stands in need of such a construction, and have exerted extraordinary efforts in the service of trying to devise such constructions. This fact raises several fascinating philosophical problems. Though physicists offer as strong motivation the possibility of gaining the ability to analyze singular phenomena locally in a mathematically well-defined manner, they more often speak in terms that strongly suggest they suffer a metaphysical, even an ontological, itch that can be scratched only by the sharp point of a localizable, spatiotemporal entity serving as the locus of their theorizing.

However, even were such a construction forthcoming, what sort of physical and theoretical status could accrue to these missing points? They would not be idealizations of a physical system in any ordinary sense of the term, insofar as they would not represent a simplified model of a system formed by ignoring various of its physical features, as, for example, one may idealize the modeling of a fluid by ignoring its viscosity. Neither would they seem necessarily to be only convenient mathematical fictions, as, for example, are the physically impossible dynamical evolutions of a system one integrates over in the variational derivation of the Euler-Lagrange equations, for, as we have remarked, many physicists and philosophers seem eager to find such a construction for the purpose of bestowing substantive and clear ontic status on singular structure. What sorts of theoretical entities, then, could they be, and how could they serve in physical theory?

While the point of this project may seem at bottom identical to the path incompleteness account discussed in §1.1, insofar as singular structure will be defined by the presence of incomplete, inextendible paths, there is a crucial semantic and logical difference between the two. Here, the existence of the incomplete path is not taken itself to constitute the singular structure, but rather serves only as a marker for the presence of singular structure in the sense of missing points: the incomplete path is incomplete because it "runs into a hole" in the spacetime that, were it filled, would allow the path to be continued; this hole is the singular structure, and the points constructed to fill it compose its locus. Currently, however, there seems to be even less consensus on how (and whether) one should define singular structure in terms of missing points than there is regarding definitions in terms of path incompleteness. Moreover, this project also faces even more technical and philosophical problems. For these reasons, path incompleteness is generally considered the default definition of singularities.

1.3 Curvature Pathology

While path incompleteness seems to capture an important aspect of the intuitive picture of singular structure, it completely ignores another seemingly integral aspect of it: curvature pathology.
If there are incomplete paths in a spacetime, it seems that there should be a reason that the path cannot go farther. The most obvious candidate explanation of this sort is something going wrong with the dynamical structure of the spacetime, which is to say, with the curvature of the spacetime. This suggestion is bolstered by the fact that local measures of curvature do in fact blow up as one approaches the singularity of a standard black hole or the big bang singularity. However, there is one problem with this line of thought: no species of curvature pathology we know how to define is either necessary or sufficient for the existence of incomplete paths. (For a discussion of defining singularities in terms of curvature pathologies, see Curiel 1998.)

To make the notion of curvature pathology more precise, we will use the manifestly physical idea of tidal force. Tidal force is generated by the differential in intensity of the gravitational field, so to speak, at neighboring points of spacetime. For example, when you stand, your head is farther from the center of the Earth than your feet, so it feels a slightly smaller pull downward than your feet (a difference that is, in practice, negligible). (For a diagram illustrating the nature of tidal forces, see Figure 9 of the entry on Inertial Frames.) Tidal forces are a physical manifestation of spacetime curvature, and one gets direct observational access to curvature by measuring these forces. For our purposes, it is important that in regions of extreme curvature, tidal forces can grow without bound.

It is perhaps surprising that the state of motion of the observer as it traverses an incomplete path (e.g., whether the observer is accelerating or spinning) can be decisive in determining the physical response of an object to the curvature pathology. Whether the object is spinning on its axis or not, for example, or accelerating slightly in the direction of motion, may determine whether the object gets crushed to zero volume along such a path or whether it survives (roughly) intact all the way along it, as in examples offered by Ellis and Schmidt (1977). The effect of the observer's state of motion on his or her experience of tidal forces can be even more pronounced than this. There are examples of spacetimes in which an observer cruising along a certain kind of path would experience unbounded tidal forces and so be torn apart, while another observer, in a certain technical sense approaching the same limiting point as the first observer, accelerating and decelerating in just the proper way, would experience a perfectly well-behaved tidal force, though she would approach as near as one likes to the other fellow who is in the midst of being ripped to shreds.[2]

Things can get stranger still. There are examples of incomplete geodesics contained entirely within a well-defined area of a spacetime, each having as its limiting point an honest-to-goodness point of spacetime, such that an observer freely falling along such a path would be torn apart by unbounded tidal forces; it can easily be arranged in such cases, however, that a separate observer, who actually travels through the limiting point, will experience perfectly well-behaved tidal forces.[3] Here we have an example of an observer being ripped apart by unbounded tidal forces right in the middle of spacetime, as it were, while other observers cruising peacefully by could reach out to touch him or her in solace during the final throes of agony.
This example also provides a nice illustration of the inevitable difficulties attendant on attempts to localize singular structure. It would seem, then, that curvature pathology as standardly quantified is not in any physical sense a well-defined property of a region of spacetime simpliciter. When we consider the curvature of four-dimensional spacetime, the motion of the device that we use to probe a region (as well as the nature of the device) becomes crucially important for the question of whether pathological behavior manifests itself. This fact raises questions about the nature of quantitative measures of properties of entities in general relativity, and what ought to count as observable, in the sense of reflecting the underlying physical structure of spacetime. Because apparently pathological phenomena may occur or not depending on the types of measurements one is performing, it does not seem that this pathology reflects anything about the state of spacetime itself, or at least not in any localizable way. What then may it reflect, if anything? Much work remains to be done by both physicists and by philosophers in this area, the determination of the nature of physical quantities in general relativity and what ought to count as an observable with intrinsic physical significance. See Bergmann (1977), Bergmann and Komar (1962), Bertotti (1962), Coleman and Korté (1992), and Rovelli (1991, 2001, 2002a, 2002b) for discussion of many different topics in this area, approached from several different perspectives.

2. The Significance of Singularities

When considering the implications of spacetime singularities, it is important to note that we have good reasons to believe that the spacetime of our universe is singular. In the late 1960s, Hawking, Penrose, and Geroch proved several singularity theorems, using the path-incompleteness definition of singularities (see, e.g., Hawking and Ellis 1973). These theorems showed that if certain reasonable premises were satisfied, then in certain circumstances singularities could not be avoided. Notable among these conditions was the "positive energy condition" that captures the idea that energy is never negative. These theorems indicate that our universe began with an initial singularity, the "Big Bang," 13.7 billion years ago. They also indicate that in certain circumstances (discussed below) collapsing matter will form a black hole with a central singularity.

Should these results lead us to believe that singularities are real? Many physicists and philosophers resist this conclusion. Some argue that singularities are too repugnant to be real. Others argue that the singular behavior at the center of black holes and at the beginning of time points to the limit of the domain of applicability of general relativity. However, some are inclined to take general relativity at its word, and simply accept its prediction of singularities as a surprising, but perfectly consistent account of the geometry of our world.

2.1 Definitions and Existence of Singularities

As we have seen, there is no commonly accepted, strict definition of singularity, no physically reasonable definition of missing point, and no necessary connection of singular structure, at least as characterized by the presence of incomplete paths, to the presence of curvature pathology. What conclusions should be drawn from this state of affairs?
There seem to be two primary responses, that of Clarke (1993) and Earman (1995) on the one hand, and that of Geroch, Can-bin and Wald (1982), and Curiel (1998) on the other. The former holds that the mettle of physics and philosophy demands that we find a precise, rigorous and univocal definition of singularity. On this view, the host of philosophical and physical questions surrounding general relativity's prediction of singular structure would best be addressed with such a definition in hand, so as better to frame and answer these questions with precision in its terms, and thus perhaps find other, even better questions to pose and attempt to answer. The latter view is perhaps best summarized by a remark of Geroch, Can-bin and Wald (1982): “The purpose of a construction [of ‘singular points’], after all, is merely to clarify the discussion of various physical issues involving singular space-times: general relativity as it stands is fully viable with no precise notion of ‘singular points’.” On this view, the specific physics under investigation in any particular situation should dictate which definition of singularity to use in that situation, if, indeed, any at all. In sum, the question becomes the following: Is there a need for a single, blanket definition of singularity or does the urge for one bespeak only an old Platonic, essentialist prejudice? This question has obvious connections to the broader question of natural kinds in science. One sees debates similar to those canvassed above when one tries to find, for example, a strict definition of biological species. Clearly part of the motivation for searching for a single exceptionless definition is the impression that there is some real feature of the world (or at least of our spacetime models) which we can hope to capture precisely. Further, we might hope that our attempts to find a rigorous and exceptionless definition will help us to better understand the feature itself. Nonetheless, it is not entirely clear why we shouldn't be happy with a variety of types of singular structure, and with the permissive attitude that none should be considered the “right” definition of singularities. Even without an accepted, strict definition of singularity for relativistic spacetimes, the question can be posed of what it may mean to ascribe “existence” to singular structure under any of the available open possibilities. It is not farfetched to think that answers to this question may bear on the larger question of the existence of spacetime points in general. It would be difficult to argue that an incomplete path in a maximal relativistic spacetime does not exist in at least some sense of the term. It is not hard to convince oneself, however, that the incompleteness of the path does not exist at any particular point of the spacetime in the same way, say, as this glass of beer at this moment exists at this point of spacetime. If there were a point on the manifold where the incompleteness of the path could be localized, surely that would be the point at which the incomplete path terminated. But if there were such a point, then the path could be extended by having it pass through that point. It is perhaps this fact that lies behind much of the urgency surrounding the attempt to define singular structure as “missing points.” The demand that singular structure be localized at a particular place bespeaks an old Aristotelian substantivalism that invokes the maxim, “To exist is to exist in space and time” (Earman 1995, p. 28). 
Aristotelian substantivalism here refers to the idea contained in Aristotle's contention that everything that exists is a substance and that all substances can be qualified by the Aristotelian categories, two of which are location in time and location in space. One need not consider anything so outré as incomplete, inextendible paths, though, in order to produce examples of entities that seem undeniably to exist in some sense of the term or other, and yet which cannot have any even vaguely determined location in time and space predicated of them. Indeed, several essential features of a relativistic spacetime, singular or not, cannot be localized in the way that an Aristotelian substantivalist would demand. For example, the Euclidean (or non-Euclidean) nature of a space is not something with a precise location. Likewise, various spacetime geometrical structures (such as the metric, the affine structure, etc.) cannot be localized in the way that the Aristotelian would demand. The existential status of such entities vis-à-vis more traditionally considered objects is an open and largely ignored issue. Because of the way the issue of singular structure in relativistic spacetimes ramifies into almost every major open question in relativistic physics today, both physical and philosophical, it provides a peculiarly rich and attractive focus for these sorts of questions.

2.2 The Breakdown of General Relativity?

At the heart of all of our conceptions of a spacetime singularity is the notion of some sort of failing: a path that disappears, points that are torn out, spacetime curvature that becomes pathological. However, perhaps the failing lies not in the spacetime of the actual world (or of any physically possible world), but rather in the theoretical description of the spacetime. That is, perhaps we shouldn't think that general relativity is accurately describing the world when it posits singular structure. Indeed, in most scientific arenas, singular behavior is viewed as an indication that the theory being used is deficient. It is therefore common to claim that general relativity, in predicting that spacetime is singular, is predicting its own demise, and that classical descriptions of space and time break down at black hole singularities and at the Big Bang. Such a view seems to deny that singularities are real features of the actual world, and to assert that they are instead merely artifices of our current (flawed) physical theories. A more fundamental theory — presumably a full theory of quantum gravity — will be free of such singular behavior. For example, Ashtekar and Bojowald (2006) and Ashtekar, Pawlowski and Singh (2006) argue that, in the context of loop quantum gravity, neither the big bang singularity nor black hole singularities appear. On this reading, many of the earlier worries about the status of singularities become moot. Singularities don't exist, nor is the question of how to define them, as such, particularly urgent. Instead, the pressing question is where the border of the domain of applicability of general relativity lies. We pick up this question below in Section 5 on quantum black holes, for it is in this context that many of the explicit debates play out over the limits of general relativity.

3. Black Holes

The simplest picture of a black hole is that of a body whose gravity is so strong that nothing, not even light, can escape from it. Bodies of this type are already possible in the familiar Newtonian theory of gravity.
The "escape velocity" of a body is the velocity at which an object would have to travel to escape the gravitational pull of the body and continue flying out to infinity. Because the escape velocity is measured from the surface of an object, it becomes higher if a body contracts down and becomes more dense. (Under such contraction, the mass of the body remains the same, but its surface gets closer to its center of mass; thus the gravitational force at the surface increases.) If the object were to become sufficiently dense, the escape velocity could therefore exceed the speed of light, and light itself would be unable to escape. This much of the argument makes no appeal to relativistic physics, and the possibility of such classical black holes was noted in the late 18th Century by Michell (1784) and Laplace (1796).

These Newtonian black holes do not precipitate quite the same sense of crisis as do relativistic black holes. While light hurled ballistically from the surface of the collapsed body cannot escape, a rocket with powerful motors firing could still gently pull itself free. Taking relativistic considerations into account, however, we find that black holes are far more exotic entities. Given the usual understanding that relativity theory rules out any physical process going faster than light, we conclude that not only is light unable to escape from such a body: nothing would be able to escape this gravitational force. That includes the powerful rocket that could escape a Newtonian black hole. Further, once the body has collapsed down to the point where its escape velocity is the speed of light, no physical force whatsoever could prevent the body from continuing to collapse down further – for this would be equivalent to accelerating something to speeds beyond that of light. Thus once this critical amount of collapse is reached, the body will get smaller and smaller, more and more dense, without limit. It has formed a relativistic black hole; at its center lies a spacetime singularity.

For any given body, this critical stage of unavoidable collapse occurs when the object has collapsed to within its so-called Schwarzschild radius, which is proportional to the mass of the body. Our sun has a Schwarzschild radius of approximately three kilometers; the Earth's Schwarzschild radius is a little less than a centimeter. This means that if you could collapse all the Earth's matter down to a sphere the size of a pea, it would form a black hole. It is worth noting, however, that one does not need an extremely high density of matter to form a black hole if one has enough mass. Thus for example, if one has a couple hundred million solar masses of water at its standard density, it will be contained within its Schwarzschild radius and will form a black hole. Some supermassive black holes at the centers of galaxies are thought to be even more massive than this, at several billion solar masses.

The "event horizon" of a black hole is the point of no return. That is, it comprises the last events in the spacetime around the singularity at which a light signal can still escape to the external universe. For a standard (uncharged, non-rotating) black hole, the event horizon lies at the Schwarzschild radius. A flash of light that originates at an event inside the black hole will not be able to escape, but will instead end up in the central singularity of the black hole.
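As an aside, the Schwarzschild radius figures quoted above are easy to check numerically from the standard formula r_s = 2GM/c^2 (coincidentally, the same radius at which the Newtonian escape velocity reaches the speed of light). The following is a minimal sketch using standard constants; the variable names and the water-sphere estimate are ours, purely for illustration.

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
M_earth = 5.972e24  # Earth mass, kg

def schwarzschild_radius(m):
    """r_s = 2GM/c^2: the radius within which collapse is unavoidable."""
    return 2 * G * m / c**2

print(schwarzschild_radius(M_sun))    # ~2.95e3 m: about three kilometers
print(schwarzschild_radius(M_earth))  # ~8.9e-3 m: a little less than a centimeter

# Smallest mass of water (density rho) that fits inside its own Schwarzschild
# radius: set M = (4/3) pi r_s^3 rho with r_s = 2GM/c^2 and solve for M.
rho = 1000.0  # kg/m^3
M_water = math.sqrt(3 * c**6 / (32 * math.pi * G**3 * rho))
print(M_water / M_sun)  # ~1.4e8: so "a couple hundred million" solar masses suffices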
A light flash originating at an event outside of the event horizon will escape, but the closer to the horizon it originates, the more strongly it will be red-shifted. An outgoing beam of light that originates at an event on the event horizon itself, by definition, remains on the event horizon until the temporal end of the universe.

General relativity tells us that clocks running at different locations in a gravitational field will generally not agree with one another. In the case of a black hole, this manifests itself in the following way. Imagine someone falls into a black hole, and, while falling, she flashes a light signal to us every time her watch hand ticks. Observing from a safe distance outside the black hole, we would find the times between the arrival of successive light signals to grow larger without limit. That is, it would appear to us that time were slowing down for the falling person as she approached the event horizon. The ticking of her watch (and every other process as well) would seem to go slower and slower as she got closer and closer to the event horizon. We would never actually see the light signals she emits when she crosses the event horizon; instead, she would seem to be eternally "frozen" just above the horizon. (This talk of "seeing" the person is somewhat misleading, because the light coming from the person would rapidly become severely red-shifted, and soon would not be practically detectable.)

From the perspective of the infalling person, however, nothing unusual happens at the event horizon. She would experience no slowing of clocks, nor see any evidence that she is passing through the event horizon of a black hole. Her passing the event horizon is simply the last moment in her history at which a light signal she emits would be able to escape from the black hole. The concept of an event horizon is a global concept that depends on how the events on the event horizon relate to the overall structure of the spacetime. Locally there is nothing noteworthy about the events at the event horizon. If the black hole is fairly small, then the tidal gravitational forces there would be quite strong. This just means that the gravitational pull on one's feet, closer to the singularity, would be much stronger than the gravitational pull on one's head. That difference of force would be great enough to pull one apart. For a sufficiently large black hole, the difference in gravitation at one's feet and head would be small enough for these tidal forces to be negligible.

As in the case of singularities, alternative definitions of black holes have been explored. These definitions typically focus on the one-way nature of the event horizon: things can go in, but nothing can get out. Such accounts have not won widespread support, however, and we do not have the space here to elaborate on them further.[4]

3.1 The Geometrical Nature of Black Holes

One of the most remarkable features of relativistic black holes is that they are purely gravitational entities. A pure black hole spacetime contains no matter whatsoever. It is a "vacuum" solution to the Einstein field equations, which just means that it is a solution of Einstein's gravitational field equations in which the matter density is everywhere zero. (Of course, one can also consider a black hole with matter present.) In pre-relativistic physics we think of gravity as a force produced by the mass contained in some matter.
In the context of general relativity, however, we do away with gravitational force, and instead postulate a curved spacetime geometry that produces all the effects we standardly attribute to gravity. Thus a black hole is not a "thing" in spacetime; it is instead a feature of spacetime itself. A careful definition of a relativistic black hole will therefore rely only on the geometrical features of spacetime. We'll need to be a little more precise about what it means to be "a region from which nothing, not even light, can escape."

First, there will have to be someplace to escape to if our definition is to make sense. The most common method of making this idea precise and rigorous employs the notion of "escaping to infinity." If a particle or light ray cannot "travel arbitrarily far" from a definite, bounded region in the interior of spacetime but must remain always in the region, the idea goes, then that region is one of no escape, and is thus a black hole. The boundary of the region is called the event horizon. Once a physical entity crosses the event horizon into the hole, it never crosses it again.

Second, we will need a clear notion of the geometry that allows for "escape," or makes such escape impossible. For this, we need the notion of the "causal structure" of spacetime. At any event in the spacetime, the possible trajectories of all light signals form a cone (or, more precisely, the four-dimensional analog of a cone). Since light travels at the fastest speed allowed in the spacetime, these cones map out the possible causal processes in the spacetime. If an occurrence at an event A is able to causally affect another occurrence at event B, there must be a continuous trajectory in spacetime from event A to event B such that the trajectory lies in or on the lightcones of every event along it. (For more discussion, see the Supplementary Document: Lightcones and Causal Structure.)

Figure 1 is a spacetime diagram of a sphere of matter collapsing down to form a black hole. The curvature of the spacetime is represented by the tilting of the light cones away from 45 degrees. Notice that the light cones tilt inwards more and more as one approaches the center of the black hole. The jagged line running vertically up the center of the diagram depicts the black hole central singularity. As we emphasized in Section 1, this is not actually part of the spacetime, but might be thought of as an edge of space and time itself. Thus, one should not imagine the possibility of traveling through the singularity; this would be as nonsensical as something's leaving the diagram (i.e., the spacetime) altogether.

Figure 1: A spacetime diagram of black hole formation

What makes this a black hole spacetime is the fact that it contains a region from which it is impossible to exit while traveling at or below the speed of light. This region is marked off by the events at which the outside edge of the forward light cone points straight upward. As one moves inward from these events, the light cone tilts so much that one is always forced to move inward toward the central singularity. This point of no return is, of course, the event horizon; and the spacetime region inside it is the black hole. In this region, one inevitably moves towards the singularity; the impossibility of avoiding the singularity is exactly like the impossibility of preventing ourselves from moving forward in time. Notice that the matter of the collapsing star disappears into the black hole singularity.
All the details of the matter are completely lost; all that is left is the geometrical properties of the black hole, which can be identified with mass, charge, and angular momentum. Indeed, there are so-called "no-hair" theorems which make rigorous the claim that a black hole in equilibrium is entirely characterized by its mass, its angular momentum, and its electric charge. This has the remarkable consequence that no matter what the particulars may be of any body that collapses to form a black hole—it may be as intricate, complicated and Byzantine as one likes, composed of the most exotic materials—the final result after the system has settled down to equilibrium will be identical in every respect to a black hole that formed from the collapse of any other body having the same total mass, angular momentum and electric charge. For this reason Chandrasekhar (1983) called black holes "the most perfect objects in the universe."

4. Naked Singularities and the Cosmic Censorship Hypothesis

While spacetime singularities in general are frequently viewed with suspicion, physicists often offer the reassurance that we expect most of them to be hidden away behind the event horizons of black holes. Such singularities therefore could not affect us unless we were actually to jump into the black hole. A "naked" singularity, on the other hand, is one that is not hidden behind an event horizon. Such singularities appear much more threatening because they are uncontained, accessible to vast areas of spacetime. The heart of the worry is that singular structure would seem to signify some sort of breakdown in the fundamental structure of spacetime to such a profound depth that it could wreak havoc on any region of the universe that it were visible to. Because the structures that break down in singular spacetimes are required for the formulation of our known physical laws in general, and of initial-value problems for individual physical systems in particular, one such fear is that determinism would collapse entirely wherever the singular breakdown were causally visible. As Earman (1995, pp. 65-6) characterizes the worry, nothing would seem to stop the singularity from "disgorging" any manner of unpleasant jetsam, from TVs showing Nixon's Checkers Speech to old lost socks, in a way completely undetermined by the state of spacetime in any region whatsoever, and in such a way as to render strictly indeterminable all regions in causal contact with what it spews out.

One form that such a naked singularity could take is that of a white hole, which is a time-reversed black hole. Imagine taking a film of a black hole forming, and various astronauts, rockets, etc. falling into it. Now imagine that film being run backwards. This is the picture of a white hole: one starts with a naked singularity, out of which might appear people, artifacts, and eventually a star bursting forth. Absolutely nothing in the causal past of such a white hole would determine what would pop out of it (just as items that fall into a black hole leave no trace on the future). Because the field equations of general relativity do not pick out a preferred direction of time, if the formation of a black hole is allowed by the laws of spacetime and gravity, then white holes will also be permitted by these laws.
Roger Penrose famously suggested that although naked singularities are compatible with general relativity, in physically realistic situations naked singularities will never form; that is, any process that results in a singularity will safely deposit that singularity behind an event horizon. This suggestion, dubbed the "Cosmic Censorship Hypothesis," has met with a fair degree of success and popularity; however, it also faces several difficulties. Penrose's original formulation relied on black holes: a suitably generic singularity will always be contained in a black hole (and so causally invisible outside the black hole). As the counter-examples to various ways of articulating the hypothesis in terms of this idea have accumulated over the years, it has gradually been abandoned. More recent approaches either begin with an attempt to provide necessary and sufficient conditions for cosmic censorship itself, yielding an indirect characterization of a naked singularity as any phenomenon violating those conditions, or else they begin with an attempt to provide a characterization of a naked singularity and so conclude with a definite statement of cosmic censorship as the absence of such phenomena. The variety of proposals made using both approaches is too great to canvass here; the interested reader is referred to Joshi (2003) for a review of the current state of the art, and to Earman (1995, ch. 3) for a philosophical discussion of many of the proposals.

5. Quantum Black Holes

The challenge of uniting quantum theory and general relativity in a successful theory of quantum gravity has arguably been the greatest challenge facing theoretical physics for the past eighty years. One avenue that has seemed particularly promising here is the attempt to apply quantum theory to black holes. This is in part because, as completely gravitational entities, black holes present an especially pure case to study the quantization of gravity. Further, because the gravitational force grows without bound as one nears a standard black hole singularity, one would expect quantum gravitational effects (which should come into play at extremely high energies) to manifest themselves in black holes.

Studies of quantum mechanics in black hole spacetimes have revealed several surprises that threaten to overturn our traditional views of space, time, and matter. A remarkable parallel between the laws of black hole mechanics and the laws of thermodynamics indicates that spacetime and thermodynamics may be linked in a fundamental (and previously unimagined) way. This linkage hints at a fundamental limitation on how much entropy can be contained in a spatial region. A further topic of foundational importance is found in the so-called information loss paradox, which suggests that standard quantum evolution will not hold when black holes are present. While many of these suggestions are somewhat speculative, they nevertheless touch on deep issues in the foundations of physics.

5.1 Black Hole Thermodynamics

In the early 1970s, Bekenstein argued that the second law of thermodynamics requires one to assign a finite entropy to a black hole. His worry was that one could collapse any amount of highly entropic matter into a black hole — which, as we have emphasized, is an extremely simple object — leaving no trace of the original disorder. This seems to violate the second law of thermodynamics, which asserts that the entropy (disorder) of a closed system can never decrease.
However, adding mass to a black hole will increase its size, which led Bekenstein to suggest that the area of a black hole is a measure of its entropy. This conviction grew when, in 1972, Hawking proved that the surface area of a black hole, like the entropy of a closed system, can never decrease. The similarity between black holes and thermodynamic systems was considerably strengthened when Bardeen, Carter, and Hawking (1973) proved three other laws of black hole mechanics that parallel exactly the first, third, and "zeroth" laws of thermodynamics. Although this parallel was extremely suggestive, taking it seriously would require one to assign a non-zero temperature to a black hole, which all then agreed was absurd: all hot bodies emit thermal radiation (like the heat given off from a stove). However, according to general relativity, a black hole ought to be a perfect sink for energy, mass, and radiation, insofar as it absorbs everything (including light), and emits nothing (including light). The only temperature one might be able to assign it would be absolute zero.

This obvious fact was overthrown when Hawking (1974, 1975) demonstrated that black holes are not completely "black" after all. His analysis of quantum fields in black hole spacetimes revealed that black holes will emit particles: black holes generate heat at a temperature that is inversely proportional to their mass and directly proportional to their so-called surface gravity. A black hole glows like a lump of smoldering coal even though light should not be able to escape from it! The temperature of this "Hawking effect" radiation is extremely low for stellar-scale black holes, but for very small black holes the temperatures would be quite high. This means that a very small black hole should rapidly evaporate away, as all of its mass-energy is emitted in high-temperature Hawking radiation.

These results were taken to establish that the parallel between the laws of black hole mechanics and the laws of thermodynamics was not a mere fluke: it seems they really are getting at the same deep physics. The Hawking effect establishes that the surface gravity of a black hole can indeed be interpreted as a physical temperature. Further, mass in black hole mechanics is mirrored by energy in thermodynamics, and we know from relativity theory that mass and energy are actually equivalent. Connecting the two sets of laws also requires linking the surface area of a black hole with entropy, as Bekenstein had suggested. This black hole entropy is called its Bekenstein entropy, and is proportional to the area of the event horizon of the black hole.

5.2 The Generalized Second Law of Thermodynamics

In the context of thermodynamic systems containing black holes, one can construct apparent violations of the laws of thermodynamics, and of the laws of black hole mechanics, if one considers these laws to be independent of each other. So for example, if a black hole gives off radiation through the Hawking effect, then it will lose mass – in apparent violation of the area increase theorem. Likewise, as Bekenstein argued, we could violate the second law of thermodynamics by dumping matter with high entropy into a black hole. However, the price of dropping matter into the black hole is that its event horizon will increase in size. Likewise, the price of allowing the event horizon to shrink by giving off Hawking radiation is that the entropy of the external matter fields will go up.
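The two quantities being traded off here can be put on a common numerical footing using the standard formulas T_H = hbar c^3 / (8 pi G M k_B) for the Hawking temperature and S = k_B c^3 A / (4 G hbar) for the Bekenstein entropy. The sketch below is purely illustrative (the formulas are standard results, not derived in this entry; the constants and the solar-mass example are our choices):

import math

G, c = 6.674e-11, 2.998e8   # SI units
hbar = 1.055e-34            # reduced Planck constant, J s
k_B = 1.381e-23             # Boltzmann constant, J/K
M_sun = 1.989e30            # kg

def hawking_temperature(m):
    """T_H = hbar c^3 / (8 pi G m k_B): inversely proportional to mass."""
    return hbar * c**3 / (8 * math.pi * G * m * k_B)

def bekenstein_entropy(m):
    """S = k_B c^3 A / (4 G hbar), with horizon area A = 4 pi r_s^2."""
    r_s = 2 * G * m / c**2
    return k_B * c**3 * (4 * math.pi * r_s**2) / (4 * G * hbar)

print(hawking_temperature(M_sun))       # ~6e-8 K: negligible for stellar black holes
print(bekenstein_entropy(M_sun) / k_B)  # ~1e77: enormous, in units of k_B

Since the horizon-area term and ordinary entropy can both be expressed in the same units (multiples of k_B), it is meaningful to add them together.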
We can consider a combination of the two laws that stipulates that the sum of a black hole's area, and the entropy of the system, can never decrease. This is the generalized second law of (black hole) thermodynamics. From the time that Bekenstein first proposed that the area of a black hole could be a measure of its entropy, it was known to face difficulties that appeared insurmountable. Geroch (1971) proposed a scenario that seems to allow a violation of the generalized second law. If we have a box full of energetic radiation with a high entropy, that box will have a certain weight as it is attracted by the gravitational force of a black hole. One can use this weight to drive an engine to produce energy (e.g., to produce electricity) while slowly lowering the box towards the event horizon of the black hole. This process extracts energy, but not entropy, from the radiation in the box; once the box reaches the event horizon itself, it can have an arbitrarily small amount of energy remaining. If one then opens the box to let the radiation fall into the black hole, the size of the event horizon will not increase any appreciable amount (because the mass-energy of the black hole has barely been increased), but the thermodynamic entropy outside the black hole has decreased. Thus we seem to have violated the generalized second law.

The question of whether we should be troubled by this possible violation of the generalized law touches on several issues in the foundations of physics. The status of the ordinary second law of thermodynamics is itself a thorny philosophical puzzle, quite apart from the issue of black holes. Many physicists and philosophers deny that the ordinary second law holds universally, so one might question whether we should insist on its validity in the presence of black holes. On the other hand, the second law clearly captures some significant feature of our world, and the analogy between black hole mechanics and thermodynamics seems too rich to be thrown out without a fight. Indeed, the generalized second law is our only law that joins together the fields of general relativity, quantum mechanics, and thermodynamics. As such, it seems the most promising window we have into the truly fundamental nature of the physical world.

5.2.1 Entropy Bounds and the Holographic Principle

In response to this apparent violation of the generalized second law, Bekenstein pointed out that one could never get all of the radiation in the box arbitrarily close to the event horizon, because the box itself would have to have some volume. This observation by itself is not enough to save the second law, however, unless there is some limit to how much entropy can be contained in a given volume of space. Current physics poses no such limit, so Bekenstein (1981) postulated that the limit would be enforced by the underlying theory of quantum gravity, which black hole thermodynamics is providing a glimpse of. However, Unruh and Wald (1982) argue that there is a less ad hoc way to save the generalized second law. The heat given off by any hot body, including a black hole, will produce a kind of "buoyancy" force on any object (like our box) that blocks thermal radiation. This means that when we are lowering our box of high-entropy radiation towards the black hole, the optimal place to release that radiation will not be just above the event horizon, but rather at the "floating point" for the container.
From the time that Bekenstein first proposed that the area of a black hole could be a measure of its entropy, it was known to face difficulties that appeared insurmountable. Geroch (1971) proposed a scenario that seems to allow a violation of the generalized second law. If we have a box full of energetic radiation with a high entropy, that box will have a certain weight as it is attracted by the gravitational force of a black hole. One can use this weight to drive an engine to produce energy (e.g., to produce electricity) while slowly lowering the box towards the event horizon of the black hole. This process extracts energy, but not entropy, from the radiation in the box; once the box reaches the event horizon itself, it can have an arbitrarily small amount of energy remaining. If one then opens the box to let the radiation fall into the black hole, the size of the event horizon will not increase by any appreciable amount (because the mass-energy of the black hole has barely been increased), but the thermodynamic entropy outside the black hole has decreased. Thus we seem to have violated the generalized second law.

The question of whether we should be troubled by this possible violation of the generalized law touches on several issues in the foundations of physics. The status of the ordinary second law of thermodynamics is itself a thorny philosophical puzzle, quite apart from the issue of black holes. Many physicists and philosophers deny that the ordinary second law holds universally, so one might question whether we should insist on its validity in the presence of black holes. On the other hand, the second law clearly captures some significant feature of our world, and the analogy between black hole mechanics and thermodynamics seems too rich to be thrown out without a fight. Indeed, the generalized second law is our only law that joins together the fields of general relativity, quantum mechanics, and thermodynamics. As such, it seems the most promising window we have into the truly fundamental nature of the physical world.

5.2.1 Entropy Bounds and the Holographic Principle

In response to this apparent violation of the generalized second law, Bekenstein pointed out that one could never get all of the radiation in the box arbitrarily close to the event horizon, because the box itself would have to have some volume. This observation by itself is not enough to save the second law, however, unless there is some limit to how much entropy can be contained in a given volume of space. Current physics poses no such limit, so Bekenstein (1981) postulated that the limit would be enforced by the underlying theory of quantum gravity, of which black hole thermodynamics is providing a glimpse. However, Unruh and Wald (1982) argue that there is a less ad hoc way to save the generalized second law. The heat given off by any hot body, including a black hole, will produce a kind of "buoyancy" force on any object (like our box) that blocks thermal radiation. This means that when we are lowering our box of high-entropy radiation towards the black hole, the optimal place to release that radiation will not be just above the event horizon, but rather at the "floating point" for the container. Unruh and Wald demonstrate that this fact is enough to guarantee that the decrease in outside entropy will be compensated by an increase in the area of the event horizon. It therefore seems that there is no reliable way to violate the generalized second law of black hole thermodynamics.

There is, however, a further reason one might think that black hole thermodynamics implies a fundamental bound on the amount of entropy that can be contained in a region. Suppose that there were more entropy in some region of space than the Bekenstein entropy of a black hole of the same size. Then one could collapse that entropic matter into a black hole, which obviously could not be larger than the size of the original region (or the mass-energy would have already formed a black hole). But this would violate the generalized second law, for the Bekenstein entropy of the resulting black hole would be less than that of the matter that formed it. Thus the second law appears to imply a fundamental limit on how much entropy a region can contain. If this is right, it seems to be a deep insight into the nature of quantum gravity.

Arguments along these lines led 't Hooft (1985) to postulate the "Holographic Principle" (though the title is due to Susskind). This principle claims that the number of fundamental degrees of freedom in any spherical region is given by the Bekenstein entropy of a black hole of the same size as that region. The Holographic Principle is notable not only because it postulates a well-defined, finite number of degrees of freedom for any region, but also because this number grows as the area surrounding the region, and not as the volume of the region. This flies in the face of the standard physical picture, whether of particles or fields. According to that picture, the entropy counts the number of possible ways something can be, and that number of ways increases as the volume of any spatial region. The Holographic Principle does get some support from a result in string theory known as the "AdS/CFT correspondence". If the Principle is correct, then one spatial dimension can, in a sense, be viewed as superfluous: the fundamental physical story of a spatial region is actually a story that can be told merely about the boundary of the region.

5.2.2 What Does Black Hole Entropy Measure?

In classical thermodynamics, that a system possesses entropy is often attributed to the fact that in practice we are never able to give a "complete" description of it. When describing a cloud of gas, we do not specify values for the position and velocity of every molecule in it; we rather describe it in terms of quantities, such as pressure and temperature, constructed as statistical measures over underlying, more finely grained quantities, such as the momentum and energy of the individual molecules. The entropy of the gas then measures the incompleteness, as it were, of the gross description. In the attempt to take seriously the idea that a black hole has a true physical entropy, it is therefore natural to attempt to construct such a statistical origin for it. The tools of classical general relativity cannot provide such a construction, for it allows no way to describe a black hole as a system whose physical attributes arise as gross statistical measures over underlying, more finely grained quantities. Not even the tools of quantum field theory on curved spacetime can provide it, for they still treat the black hole as an entity defined entirely in terms of the classical geometry of the spacetime.
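To get a feel for the magnitudes such a statistical accounting must reproduce, here is a small numerical sketch in Python (illustrative only; the physical constants are standard, and the Schwarzschild radius formula r_s = 2GM/c^2 is assumed):

    import math

    # Standard constants (SI units)
    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
    planck_area = G * hbar / c**3        # l_P^2, about 2.6e-70 m^2

    def bekenstein_entropy(radius_m):
        """Horizon entropy in units of k_B: S/k_B = A / (4 l_P^2)."""
        area = 4 * math.pi * radius_m**2
        return area / (4 * planck_area)

    M_sun = 1.989e30                      # kg
    r_s = 2 * G * M_sun / c**2            # Schwarzschild radius, ~3 km

    print(bekenstein_entropy(r_s))        # ~1e77 for a solar-mass black hole
    print(bekenstein_entropy(1.0))        # ~1e70 for a sphere of radius 1 m

By Boltzmann's relation S = k_B ln W, a solar-mass black hole would then correspond to something like e^(10^77) micro-states; this is the count that any underlying theory, string-theoretic or otherwise, has to recover.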
Any such statistical accounting, therefore, must come from a theory that attributes to the classical geometry a description in terms of an underlying, discrete collection of micro-states. Explaining what these states are that are counted by the Bekenstein entropy has been a challenge eagerly pursued by quantum gravity researchers. In 1996, superstring theorists were able to give an account of how M-theory (an extension of superstring theory) generates, for a certain class of black holes, a number of string states that matches the Bekenstein entropy (Strominger and Vafa, 1996). A counting of black hole states using loop quantum gravity has also recovered the Bekenstein entropy (Ashtekar et al., 1998). It is philosophically noteworthy that this is treated as a significant success for these theories (i.e., it is presented as a reason for thinking that these theories are on the right track) even though Hawking radiation has never been experimentally observed (in part, because for macroscopic black holes the effect is minute).

5.3 Information Loss Paradox

Hawking's discovery that black holes give off radiation presented an apparent problem for the possibility of describing black holes quantum mechanically. According to standard quantum mechanics, the entropy of a closed system never changes; this is captured formally by the "unitary" nature of quantum evolution. Such evolution guarantees that the initial conditions, together with the Schrödinger equation, will fix the future state of the system. Likewise, a reverse application of the Schrödinger equation will take us from the later state back to the original initial state. The states at each time are rich enough, detailed enough, to fix (via the dynamical equations) the states at all other times. Thus there is a sense in which the completeness of the state is maintained by unitary time evolution. It is typical to characterize this feature with the claim that quantum evolution "preserves information". If one begins with a system in a precisely known quantum state, then unitary evolution guarantees that the details about that system will evolve in such a way that one can infer the precise quantum state of the system at some later time (as long as one knows the law of evolution and can perform the relevant calculations), and vice versa.

This quantum preservation of details implies that if we burn a chair, for example, it would in principle be possible to perform a complete set of measurements on all the outgoing radiation, the smoke, and the ashes, and reconstruct exactly what the chair looked like. However, if we were instead to throw the chair into a black hole, then it would be physically impossible for the details about the chair ever to escape to the outside universe. This might not be a problem if the black hole continued to exist for all time, but Hawking tells us that the black hole is giving off energy, and thus it will shrink down and presumably will eventually disappear altogether. At that point, the details about the chair will be irrevocably lost; thus such evolution cannot be described unitarily. This problem has been labeled the "information loss paradox" of quantum black holes.

(A brief technical explanation for those familiar with quantum mechanics: The argument is simply that the interior and the exterior of the black hole will generally be entangled.
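In symbols (a standard textbook formulation, not specific to any of the authors above): the relevant entropy is the von Neumann entropy S(\rho) = -k_B \mathrm{Tr}(\rho \ln \rho), which is invariant under unitary evolution, S(U\rho U^\dagger) = S(\rho). If the total state |\Psi\rangle of interior plus exterior is pure but entangled, then the exterior on its own is described by the mixed reduced state \rho_{ext} = \mathrm{Tr}_{int}|\Psi\rangle\langle\Psi|, with S(\rho_{ext}) > 0.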
However, microcausality implies that the entangled degrees of freedom in the black hole cannot coherently recombine with the external universe. Thus once the black hole has completely evaporated away, the entropy of the universe will have increased — in violation of unitary evolution.)

The attitude physicists adopted towards this paradox was apparently strongly influenced by their vision of which theory, general relativity or quantum theory, would have to yield to achieve a consistent theory of quantum gravity. Spacetime physicists tended to view non-unitary evolution as a fairly natural consequence of singular spacetimes: one wouldn't expect all the details to be available at late times if they were lost in a singularity. Hawking, for example, argued that the paradox shows that the full theory of quantum gravity will be a non-unitary theory, and he began working to develop such a theory. (He has since abandoned this position.)

However, particle physicists (such as superstring theorists) tended to view black holes as being just another quantum state. If two particles were to collide at extremely high (i.e., Planck-scale) energies, they would form a very small black hole. This tiny black hole would have a very high Hawking temperature, and thus it would very quickly give off many high-energy particles and disappear. Such a process would look very much like a standard high-energy scattering experiment: two particles collide and their mass-energy is then converted into showers of outgoing particles. The fact that all known scattering processes are unitary then seems to give us some reason to expect that black hole formation and evaporation should also be unitary.

These considerations led many physicists to propose scenarios that might allow for the unitary evolution of quantum black holes, while not violating other basic physical principles, such as the requirement that no physical influences be allowed to travel faster than light (the requirement of "microcausality"), at least not when we are far from the domain of quantum gravity (the "Planck scale"). Once energies do enter the domain of quantum gravity, e.g. near the central singularity of a black hole, then we might expect the classical description of spacetime to break down; thus, physicists were generally prepared to allow for the possibility of violations of microcausality in this region. A very helpful overview of this debate can be found in Belot, Earman, and Ruetsche (1999).

Most of the scenarios proposed to escape Hawking's argument faced serious difficulties and have been abandoned by their supporters. The proposal that currently enjoys the most wide-spread (though certainly not universal) support is known as "black hole complementarity". This proposal has been the subject of philosophical controversy because it includes apparently incompatible claims, and then tries to escape the contradiction by making a controversial appeal to quantum complementarity or (so charge the critics) verificationism.

5.3.1 Black Hole Complementarity

The challenge of saving information from a black hole lies in the fact that it is impossible to copy the quantum details (especially the quantum correlations) that are preserved by unitary evolution. This implies that if the details pass behind the event horizon, for example, if an astronaut falls into a black hole, then those details are lost forever. Advocates of black hole complementarity (Susskind et al.
1993), however, point out that an outside observer will never see the infalling astronaut pass through the event horizon. Instead, as we saw in Section 2, she will seem to hover at the horizon for all time. But all the while, the black hole will also be giving off heat, and shrinking down, and getting hotter, and shrinking more. The black hole complementarian therefore suggests that an outside observer should conclude that the infalling astronaut gets burned up before she crosses the event horizon, and all the details about her state will be returned in the outgoing radiation, just as would be the case if she and her belongings were incinerated in a more conventional manner; thus the information (and standard quantum evolution) is saved.

However, this suggestion flies in the face of the fact (discussed earlier) that an infalling observer should experience nothing out of the ordinary at the event horizon. Indeed, for a large enough black hole, she wouldn't even know that she was passing through an event horizon at all. This obviously contradicts the suggestion that she might be burned up as she passes through the horizon. The black hole complementarian tries to resolve this contradiction by agreeing that the infalling observer will notice nothing remarkable at the horizon, and then suggesting that the account of the infalling astronaut should be considered to be "complementary" to the account of the external observer, rather in the same way that position and momentum are complementary descriptions of quantum particles (Susskind et al. 1993). The fact that the infalling observer cannot communicate to the external world that she survived her passage through the event horizon is supposed to imply that there is no genuine contradiction here.

This solution to the information loss paradox has been criticized for making an illegitimate appeal to verificationism (Belot, Earman, and Ruetsche 1999). However, the proposal has nevertheless won wide-spread support in the physics community, in part because models of M-theory seem to behave somewhat as the black hole complementarian scenario suggests (for a philosophical discussion, see van Dongen and de Haro 2004). Bokulich (2005) argues that the most fruitful way of viewing black hole complementarity is as a novel suggestion for how a non-local theory of quantum gravity will recover the local behavior of quantum field theory when black holes are involved.

6. Conclusion: Philosophical Issues

The physical investigation of spacetime singularities and black holes has touched on numerous philosophical issues. To begin, we were confronted with the question of the definition and significance of singularities. Should they be defined in terms of incomplete paths, missing points, or curvature pathology? Should we even think that there is a single correct answer to this question? Need we include such things in our ontology, or do they instead merely indicate the break-down of a particular physical theory? Are they "edges" of spacetime, or merely inadequate descriptions that will be dispensed with by a truly fundamental theory of quantum gravity? This has obvious connections to the issue of how we are to interpret the ontology of merely effective physical descriptions. The debate over the information loss paradox also highlights the conceptual importance of the relationship between different effective theories.
At root, the debate is over where and how our effective physical theories will break down: when can they be trusted, and where must they be replaced by a more adequate theory?

Black holes appear to be crucial for our understanding of the relationship between matter and spacetime. As discussed in Section 3, when matter forms a black hole, it is transformed into a purely gravitational entity. When a black hole evaporates, spacetime curvature is transformed into ordinary matter. Thus black holes offer an important arena for investigating the ontology of spacetime and ordinary objects.

Black holes were also seen to provide an important testing ground to investigate the conceptual problems underlying quantum theory and general relativity. The question of whether black hole evolution is unitary raises the issue of how the unitary evolution of standard quantum mechanics serves to guarantee that no experiment can reveal a violation of energy conservation or of microcausality. Likewise, the debate over the information loss paradox can be seen as a debate over whether spacetime or an abstract dynamical state space (Hilbert space) should be viewed as being more fundamental. Might spacetime itself be an emergent entity belonging only to an effective physical theory?

Singularities and black holes are arguably our best windows into the details of quantum gravity, which would seem to be the best candidate for a truly fundamental physical description of the world (if such a fundamental description exists). As such, they offer glimpses into the deepest nature of matter, dynamical laws, and space and time; and these glimpses seem to call for a conceptual revision at least as great as that required by quantum mechanics or relativity theory alone.

Bibliography

• Ashtekar, A., J. Baez, A. Corichi, and K. Krasnov, 1998, "Quantum Geometry and Black Hole Entropy", Physical Review Letters, 80: 904.
• Ashtekar, A. and M. Bojowald, 2006, "Quantum Geometry and the Schwarzschild Singularity", Classical and Quantum Gravity, 23: 391-411.
• Ashtekar, A., T. Pawlowski, and P. Singh, 2006, "Quantum Nature of the Big Bang", Physical Review Letters, 96: 141301.
• Bardeen, J. M., B. Carter, and S. W. Hawking, 1973, "The Four Laws of Black Hole Mechanics", Communications in Mathematical Physics, 31: 161-170.
• Bekenstein, J. D., 1973, "Black Holes and Entropy", Physical Review D, 7: 2333-2346.
• Bekenstein, J. D., 1981, "Universal Upper Bound on the Entropy-to-Energy Ratio for Bounded Systems", Physical Review D, 23: 287-298.
• Belot, G., J. Earman, and L. Ruetsche, 1999, "The Hawking Information Loss Paradox: The Anatomy of a Controversy", British Journal for the Philosophy of Science, 50: 189-229.
• Bergmann, P., 1977, "Geometry and Observables", in Earman, Glymour, and Stachel (1977), 275-280.
• Bergmann, P. and A. Komar, 1962, "Observables and Commutation Relations", in A. Lichnerowicz and A. Tonnelat, eds., Les Théories Relativistes de la Gravitation, Paris: CNRS, 309-325.
• Bertotti, B., 1962, "The Theory of Measurement in General Relativity", in C. Møller, ed., Evidence for Gravitational Theories ("Proceedings of the International School of Physics 'Enrico Fermi'", Course XX), New York: Academic Press, 174-201.
• Bokulich, P., 2001, "Black Hole Remnants and Classical vs. Quantum Gravity", Philosophy of Science, 68: S407-S423.
• Bokulich, P., 2005, "Does Black Hole Complementarity Answer Hawking's Information Loss Paradox?", Philosophy of Science, 72: 1336-1349.
• Chandrasekhar, S., 1983, The Mathematical Theory of Black Holes, Oxford: Oxford University Press.
• Clarke, C., 1993, The Analysis of Space-Time Singularities, Cambridge: Cambridge University Press.
• Coleman, R. and H. Korté, 1992, "The Relation between the Measurement and Cauchy Problems of GTR", in H. Sato and T. Nakamura, eds., Proceedings of the 6th Marcel Grossmann Meeting on General Relativity (held at Kyoto International Conference Hall, Kyoto, Japan, 23-29 June 1991), Singapore: World Scientific Press, 97-119.
• Curiel, E., 1998, "The Analysis of Singular Spacetimes", Philosophy of Science, 66: S119-S145.
• Earman, J., 1995, Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, New York: Oxford University Press.
• Earman, J., C. Glymour, and J. Stachel, eds., 1977, Foundations of Space-Time Theories (Minnesota Studies in the Philosophy of Science, vol. VIII), Minneapolis: University of Minnesota Press.
• Ellis, G. and B. Schmidt, 1977, "Singular Space-Times", General Relativity and Gravitation, 8: 915-953.
• Geroch, R., 1968a, "What Is a Singularity in General Relativity?", Annals of Physics, 48: 526-540.
• Geroch, R., 1968b, "Local Characterization of Singularities in General Relativity", Journal of Mathematical Physics, 9: 450-465.
• Geroch, R., 1970, "Singularities", in M. Carmeli, S. Fickler, and L. Witten, eds., Relativity, New York: Plenum Press, 259-291.
• Geroch, R., 1971, remarks made at a colloquium in Princeton, as reported by, among others, Israel (1987, 263).
• Geroch, R., 1977, "Prediction in General Relativity", in Earman, Glymour, and Stachel (1977), 81-93.
• Geroch, R., 1981, General Relativity from A to B, Chicago: University of Chicago Press.
• Geroch, R., 1985, Mathematical Physics, Chicago: University of Chicago Press.
• Geroch, R., L. Can-bin, and R. Wald, 1982, "Singular Boundaries of Space-times", Journal of Mathematical Physics, 23: 432-435.
• Geroch, R., E. Kronheimer, and R. Penrose, 1972, "Ideal Points in Space-time", Philosophical Transactions of the Royal Society (London), A327: 545-567.
• Hawking, S., 1967, "The Occurrence of Singularities in Cosmology. III", Philosophical Transactions of the Royal Society (London), A300: 187-210.
• Hawking, S. W., 1974, "Black Hole Explosions?", Nature, 248: 30-31.
• Hawking, S. W., 1975, "Particle Creation by Black Holes", Communications in Mathematical Physics, 43: 199-220.
• Hawking, S. W., 1976, "The Breakdown of Predictability in Gravitational Collapse", Physical Review D, 14: 2460-2473.
• Hawking, S. W., 1982, "The Unpredictability of Quantum Gravity", Communications in Mathematical Physics, 87: 395-415.
• Hawking, S. and G. Ellis, 1973, The Large Scale Structure of Space-Time, Cambridge: Cambridge University Press.
• Israel, W., 1987, "Dark Stars: The Evolution of an Idea", in S. Hawking and W. Israel, eds., 300 Years of Gravitation, Cambridge: Cambridge University Press, 199-276.
• Joshi, P., 1993, Global Aspects in Gravitation and Cosmology, Oxford: Clarendon Press.
• Joshi, P., 2003, "Cosmic Censorship: A Current Perspective", Modern Physics Letters A, 17: 1067-1079.
• Kiem, Y., H. Verlinde, and E. Verlinde, 1995, "Black Hole Horizons and Complementarity", Physical Review D, 52: 7053-7065.
• Laplace, P., 1796, Exposition du Système du Monde, Paris: Cercle-Social.
• Lowe, D., J. Polchinski, L. Susskind, L.
Thorlacius, and J. Uglum, 1995, "Black Hole Complementarity versus Locality", Physical Review D, 52: 6997-7010.
• Lowe, D. and L. Thorlacius, 1999, "AdS/CFT and the Information Paradox", Physical Review D, 60: 104012-1 to 104012-7.
• Michell, J., 1784, "On the Means of discovering the Distance, Magnitude, etc. of the Fixed Stars, in consequence of the Diminution of the velocity of their Light, in case such a Diminution should be found to take place in any of them, and such Data should be procured from Observations, as would be farther necessary for that Purpose", Philosophical Transactions, 74: 35-57.
• Misner, C., K. Thorne, and J. Wheeler, 1973, Gravitation, San Francisco: Freeman Press.
• Penrose, R., 1969, "Gravitational Collapse: The Role of General Relativity", Rivista del Nuovo Cimento, 1: 272-276.
• Rovelli, C., 1991, "What Is Observable in Classical and Quantum Gravity?", Classical and Quantum Gravity, 8: 297-316.
• Rovelli, C., 2001, "A Note on the Foundation of Relativistic Mechanics. I: Relativistic Observables and Relativistic States", available as arXiv:gr-qc/0111037v2.
• Rovelli, C., 2002a, "GPS Observables in General Relativity", Physical Review D, 65: 044017.
• Rovelli, C., 2002b, "Partial Observables", Physical Review D, 65: 124013.
• Rovelli, C., 2004, Quantum Gravity, Cambridge: Cambridge University Press.
• Stephens, C. R., G. 't Hooft, and B. F. Whiting, 1994, "Black Hole Evaporation Without Information Loss", Classical and Quantum Gravity, 11: 621-647.
• Strominger, A. and C. Vafa, 1996, "Microscopic Origin of the Bekenstein-Hawking Entropy", Physics Letters B, 379: 99-104.
• Susskind, L., 1995, "The World as a Hologram", Journal of Mathematical Physics, 36: 6377-6396.
• Susskind, L., 1997, "Black Holes and the Information Paradox", Scientific American, 272(4), April: 52-57.
• Susskind, L. and L. Thorlacius, 1994, "Gedanken Experiments Involving Black Holes", Physical Review D, 49: 966-974.
• Susskind, L., L. Thorlacius, and J. Uglum, 1993, "The Stretched Horizon and Black Hole Complementarity", Physical Review D, 48: 3743-3761.
• Susskind, L. and J. Uglum, 1996, "String Physics and Black Holes", Nuclear Physics B (Proceedings Supplement), 45: 115-134.
• 't Hooft, G., 1985, "On the Quantum Structure of a Black Hole", Nuclear Physics B, 256: 727-745.
• 't Hooft, G., 1996, "The Scattering Matrix Approach for the Quantum Black Hole: an Overview", International Journal of Modern Physics A, 11: 4623-4688.
• Thorlacius, L., 1995, "Black Hole Evolution", Nuclear Physics B (Proceedings Supplement), 41: 245-275.
• Thorne, K., 1995, Black Holes and Time Warps: Einstein's Outrageous Legacy, New York: W. W. Norton and Co.
• Thorne, K., R. Price, and D. Macdonald, 1986, Black Holes: The Membrane Paradigm, New Haven: Yale University Press.
• Unruh, W., 1976, "Notes on Black Hole Evaporation", Physical Review D, 14: 870-892.
• Unruh, W. and R. M. Wald, 1982, "Acceleration Radiation and the Generalized Second Law of Thermodynamics", Physical Review D, 25: 942-958.
• Unruh, W. and R. M. Wald, 1995, "Evolution Laws Taking Pure States to Mixed States in Quantum Field Theory", Physical Review D, 52: 2176-2182.
• van Dongen, J. and S. de Haro, 2004, "On Black Hole Complementarity", Studies in History and Philosophy of Modern Physics, 35: 509-525.
• Wald, R. M., 1984, General Relativity, Chicago: University of Chicago Press.
• Wald, R., 1992, Space, Time, and Gravity: The Theory of the Big Bang and Black Holes, second edition, Chicago: University of Chicago Press.
• Wald, R.
M., 1994, Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics, Chicago: University of Chicago Press.
• Wald, R. M., 2001, "The Thermodynamics of Black Holes", Living Reviews in Relativity, 4(6): 1-44.
Hydrogen, 1H

[Image: hydrogen discharge tube — purple glow in its plasma state]
[Image: spectral lines of hydrogen]

General properties
Name, symbol: hydrogen, H
Pronunciation: /ˈhaɪdrədʒən/[1]
Appearance: colorless gas
Atomic number (Z): 1
Group, block: group 1, s-block
Period: 1
Element category: diatomic nonmetal (could be considered a metalloid)
Standard atomic weight: 1.008[2] (1.00784–1.00811)[3]
Electron configuration: 1s1 (electrons per shell: 1)

Physical properties
Color: colorless
Phase: gas
Melting point: 13.99 K (−259.16 °C, −434.49 °F)
Boiling point: 20.271 K (−252.879 °C, −423.182 °F)
Density at STP (0 °C and 101.325 kPa): 0.08988 g/L
Density when liquid, at m.p.: 0.07 g/cm3 (solid: 0.0763 g/cm3)[4]
Density when liquid, at b.p.: 0.07099 g/cm3
Triple point: 13.8033 K, 7.041 kPa
Critical point: 32.938 K, 1.2858 MPa
Heat of fusion (H2): 0.117 kJ/mol
Heat of vaporization (H2): 0.904 kJ/mol
Molar heat capacity (H2): 28.836 J/(mol·K)

Atomic properties
Oxidation states: −1, +1 (an amphoteric oxide)
Electronegativity: 2.20 (Pauling scale)
Ionization energy: 1st: 1312.0 kJ/mol
Covalent radius: 31±5 pm
Van der Waals radius: 120 pm
Crystal structure: hexagonal
Speed of sound: 1310 m/s (gas, 27 °C)
Thermal conductivity: 0.1805 W/(m·K)
Magnetic ordering: diamagnetic[5]
CAS Number: 1333-74-0
Discovery: Henry Cavendish[6][7] (1766)
Named by: Antoine Lavoisier[8] (1783)
Most stable isotopes: 1H (99.98%, stable with 0 neutrons); 2H (0.02%, stable with 1 neutron); 3H (trace; half-life 12.32 y; β− decay, 0.01861 MeV, to 3He)

Hydrogen is a chemical element with chemical symbol H and atomic number 1. With an atomic weight of 1.00794 u, hydrogen is the lightest element on the periodic table. Its monatomic form (H) is the most abundant chemical substance in the Universe, constituting roughly 75% of all baryonic mass.[9][note 1] Non-remnant stars are mainly composed of hydrogen in the plasma state. The most common isotope of hydrogen, termed protium (a name rarely used, symbol 1H), has one proton and no neutrons. The universal emergence of atomic hydrogen first occurred during the recombination epoch.

At standard temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic, highly combustible diatomic gas with the molecular formula H2. Since hydrogen readily forms covalent compounds with most nonmetallic elements, most of the hydrogen on Earth exists in molecular forms such as water or organic compounds. Hydrogen plays a particularly important role in acid–base reactions because most acid–base reactions involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a negatively charged species (i.e., an anion), in which case it is known as a hydride, or of a positively charged species (i.e., a cation) denoted by the symbol H+. The hydrogen cation is written as though composed of a bare proton, but in reality, hydrogen cations in ionic compounds are always more complex. Because the hydrogen atom is the only neutral atom for which the Schrödinger equation can be solved analytically,[10] the study of its energetics and bonding has played a key role in the development of quantum mechanics.

Hydrogen gas was first artificially produced in the early 16th century by the reaction of acids on metals. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance,[11] and that it produces water when burned, the property for which it was later named: in Greek, hydrogen means "water-former".
Industrial production is mainly from the steam reforming of natural gas, and less often from more energy-intensive methods such as the electrolysis of water.[12] Most hydrogen is used near the site of its production, the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals,[13] complicating the design of pipelines and storage tanks.[14]

[Image: The Space Shuttle Main Engine burnt hydrogen with oxygen, producing a nearly invisible flame at full thrust.]

Hydrogen gas (dihydrogen or molecular hydrogen)[15] is highly flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume.[16] The enthalpy of combustion is −286 kJ/mol:[17]

2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ per mol of H2)

Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The explosive reactions may be triggered by spark, heat, or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F).[18] Pure hydrogen–oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite propellant. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames.[19] The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated. The visible orange flames in that incident were the result of a rich mixture of hydrogen to oxygen combined with carbon compounds from the airship skin.

Electron energy levels

[Image: Depiction of a hydrogen atom with the size of the central proton shown, and the atomic diameter shown as about twice the Bohr model radius (image not to scale).]

The ground state energy level of the electron in a hydrogen atom is −13.6 eV,[21] which is equivalent to an ultraviolet photon of roughly 91 nm wavelength.[22] The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the atomic electron and proton are held together by the electromagnetic force, while planets and celestial objects are held by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.[23]

A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation, the Dirac equation, or even the Feynman path integral formulation to calculate the probability density of the electron around the proton.[24] The most complicated treatments allow for the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum at all—illustrating how the "planetary orbit" picture differs from electron motion.
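The energy-level formula quoted above is simple enough to evaluate directly. A minimal Python sketch, assuming the standard Bohr-model values E_n = −13.6 eV/n² and hc ≈ 1239.8 eV·nm:

    RYDBERG_EV = 13.6057      # hydrogen ionization energy, eV
    HC_EV_NM = 1239.84        # h*c in eV*nm

    def energy_eV(n):
        """Bohr-model energy of level n: E_n = -13.6 eV / n^2."""
        return -RYDBERG_EV / n**2

    def emission_wavelength_nm(n_upper, n_lower):
        """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
        return HC_EV_NM / (energy_eV(n_upper) - energy_eV(n_lower))

    print(energy_eV(1))                     # -13.6 eV, the ground state
    print(emission_wavelength_nm(2, 1))     # ~121.5 nm (Lyman-alpha)
    print(HC_EV_NM / RYDBERG_EV)            # ~91.1 nm, the ionization edge quoted above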
Elemental molecular forms

[Image: First tracks observed in the liquid hydrogen bubble chamber at the Bevatron.]

There exist two different spin isomers of hydrogen diatomic molecules that differ by the relative spin of their nuclei.[25] In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular spin quantum number of 1 (1/2 + 1/2); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (1/2 − 1/2). At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form".[26] The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but because the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in spin isomers of hydrogen.[27] The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal properties.[28]

The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly.[29] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho–para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel[30] compounds, are used during hydrogen cooling.[31]
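The temperature dependence of the equilibrium ortho/para ratio can be estimated from nuclear-spin-weighted rotational partition sums: odd rotational levels J (statistical weight 3) belong to ortho-H2, even levels (weight 1) to para-H2. A sketch, assuming the standard rotational temperature of H2, θ_rot ≈ 87.6 K:

    import math

    THETA_ROT = 87.6   # K, rotational temperature of H2 (hcB/k_B with B ~ 60.9 cm^-1)

    def ortho_fraction(T, j_max=40):
        """Equilibrium fraction of ortho-H2 at temperature T (kelvin)."""
        level = lambda j: (2 * j + 1) * math.exp(-THETA_ROT * j * (j + 1) / T)
        z_para = sum(level(j) for j in range(0, j_max, 2))       # even J, weight 1
        z_ortho = 3 * sum(level(j) for j in range(1, j_max, 2))  # odd J, weight 3
        return z_ortho / (z_para + z_ortho)

    for T in (300, 77, 20):
        print(T, round(ortho_fraction(T), 3))
    # ~0.75 at 300 K, ~0.48 at 77 K, ~0.001 at 20 K

This reproduces the behavior described above: the 3:1 ortho:para "normal" mixture at room temperature, and almost pure parahydrogen near the boiling point.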
Covalent and organic compounds

While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I) or oxygen; in these compounds hydrogen takes on a partial positive charge.[32] When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with the hydrogen of other similar molecules, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules.[33][34] Hydrogen also forms compounds with less electronegative elements, such as metals and metalloids, where it takes on a partial negative charge. These compounds are often known as hydrides.[35]

Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array with heteroatoms that, because of their general association with living things, are called organic compounds.[36] The study of their properties is known as organic chemistry[37] and their study in the context of living organisms is known as biochemistry.[38] By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon–hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon–hydrogen bonds are required in some definitions of the word "organic" in chemistry.[36] Millions of hydrocarbons are known, and they are usually formed by complicated synthetic pathways that seldom involve elementary hydrogen.

Hydrides

Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like hydrides, was demonstrated by Moers in 1920 by the electrolysis of molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode.[39] For hydrides other than those of group 1 and 2 metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group 2 hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary aluminium hydride.[40] Binary indium hydride has not yet been identified, although larger complexes exist.[41]

In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes.[42]

Protons and acids

Oxidation of hydrogen removes its electron and gives H+, which contains no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton. This species is central to discussion of acids. Under the Brønsted–Lowry theory, acids are proton donors, while bases are proton acceptors. A bare proton, H+, cannot exist in solution or in ionic crystals because of its unstoppable attraction to other atoms or molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them. However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species. To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+).
However, even in this case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form species closer to H9O4+.[43] Other oxonium ions are found when water is in acidic solution with other solvents.[44] Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation.[45]

Isotopes

[Image: Hydrogen discharge (spectrum) tube.]
[Image: Deuterium discharge (spectrum) tube.]
[Image: Protium, the most common isotope of hydrogen, has one proton and one electron. Unique among all stable isotopes, it has no neutrons (see diproton for a discussion of why others do not exist).]

Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.[46][47]

• 1H is the most common hydrogen isotope, with an abundance of more than 99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium.[48]
• 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in the nucleus. All deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since that time. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy.[49] Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.[50]
• 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years (a short decay sketch follows this list).[42] It is radioactive enough that it can be used in luminous paint, making it useful in such things as watches; the glass prevents the small amount of radiation from getting out.[51] Small amounts of tritium are produced naturally by the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests.[52] It is used in nuclear fusion reactions,[53] as a tracer in isotope geochemistry,[54] and in specialized self-powered lighting devices.[55] Tritium has also been used in chemical and biological labeling experiments as a radiolabel.[56]
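As flagged in the list above, the quoted 12.32-year half-life translates directly into a surviving fraction; a minimal Python sketch:

    HALF_LIFE_Y = 12.32   # tritium half-life in years, as quoted above

    def tritium_fraction_remaining(years):
        """Fraction of an initial tritium sample left after the given time."""
        return 0.5 ** (years / HALF_LIFE_Y)

    print(tritium_fraction_remaining(12.32))   # 0.5 after one half-life
    print(tritium_fraction_remaining(50))      # ~0.06 after 50 years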
Hydrogen is the only element that has different names for its isotopes in common use today. During the early study of radioactivity, various heavy radioactive isotopes were given their own names, but such names are no longer used, except for deuterium and tritium. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for protium.[57] In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred.[58]

Discovery and use

In 1671, Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.[59][60] In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, naming the gas from a metal–acid reaction "inflammable air". He speculated that "inflammable air" was in fact identical to the hypothetical substance called "phlogiston",[61][62] and found further in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element.[6][7] In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek ὑδρο- hydro meaning "water" and -γενής genes meaning "creator")[8] when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.[7]

[Image: Antoine-Laurent de Lavoisier.]

Lavoisier produced hydrogen for his experiments on mass conservation by reacting a flux of steam with metallic iron through an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the following set of reactions:

Fe + H2O → FeO + H2
2 Fe + 3 H2O → Fe2O3 + 3 H2
3 Fe + 4 H2O → Fe3O4 + 4 H2

Many metals, such as zirconium, undergo a similar reaction with water, leading to the production of hydrogen.

Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask.[7] He produced solid hydrogen the next year.[7] Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck.[6] Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932.[7] François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen, in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.[7]

The first hydrogen-filled balloon was invented by Jacques Charles in 1783.[7] Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard.[7] German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of these had its maiden flight in 1900.[7] Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose.
Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on 6 May 1937.[7] The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. But the damage to hydrogen's reputation as a lifting gas was already done.

In the same year, 1937, the first hydrogen-cooled turbogenerator went into service, with gaseous hydrogen as a coolant in the rotor and the stator, at Dayton, Ohio, by the Dayton Power & Light Co.;[63] because of the thermal conductivity of hydrogen gas, this is the most common type in its field today. The nickel–hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation Technology Satellite-2 (NTS-2).[64] For example, the ISS,[65] Mars Odyssey[66] and the Mars Global Surveyor[67] are equipped with nickel–hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel–hydrogen batteries, which were finally replaced in May 2009,[68] more than 19 years after launch and 13 years beyond their design life.[69]

Role in quantum theory

[Image: Hydrogen emission spectrum lines in the visible range. These are the four visible lines of the Balmer series.]

Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure.[70] Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s.

One of the first quantum effects to be explicitly noticed (but not understood at the time) was an observation by Maxwell involving hydrogen, half a century before full quantum mechanical theory arrived. Maxwell observed that the specific heat capacity of H2 unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in H2 because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect.[71]

Antihydrogen (H̄) is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. Antihydrogen is the only type of antimatter atom to have been produced as of 2015.[72][73]

Natural occurrence

Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and more than 90% by number of atoms. (Most of the mass of the universe, however, is not in the form of chemical-element type matter, but rather is postulated to occur as yet-undetected forms of mass such as dark matter and dark energy.[74]) This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation.
Hydrogen plays a vital role in powering stars through the proton–proton reaction and the CNO cycle of nuclear fusion.[75] Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite different from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the interstellar medium. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the Universe up to redshift z = 4.[76]

Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface,[77] mostly in the form of chemical compounds such as hydrocarbons and water.[42] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance.[78]

A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This charged ion has also been observed in the upper atmosphere of the planet Jupiter. The ion is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium.[79] Neutral triatomic hydrogen H3 can exist only in an excited form and is unstable.[80] By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe.

Production

H2 is produced in chemistry and biology laboratories, often as a by-product of other reactions; in industry for the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents in biochemical reactions.

Steam reforming

Hydrogen can be prepared in several different ways, but economically the most important processes involve removal of hydrogen from hydrocarbons: as of around the year 2000, about 95% of hydrogen production came from steam reforming.[81] Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[82] At high temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2:

CH4 + H2O → CO + 3 H2

This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure H2 is the most marketable product and Pressure Swing Adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios.
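As a rough stoichiometric sketch (ideal yields only; it combines the reforming step above with the water gas shift step described below, under which each mole of CH4 ultimately gives four moles of H2):

    M_CH4, M_H2 = 16.04, 2.016     # molar masses, g/mol
    H2_PER_CH4 = 4                 # CH4 + 2 H2O -> CO2 + 4 H2 overall

    def ideal_h2_yield_kg(kg_methane):
        """Ideal hydrogen yield from steam reforming plus water gas shift."""
        mol_ch4 = 1000 * kg_methane / M_CH4
        return mol_ch4 * H2_PER_CH4 * M_H2 / 1000   # kg of H2

    print(ideal_h2_yield_kg(1.0))   # ~0.50 kg H2 per kg CH4, before real-world losses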
One of the many complications to this highly optimized technology is the formation of coke or carbon:

CH4 → C + 2 H2

Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This reaction is also a common industrial source of carbon dioxide:[82]

CO + H2O → CO2 + H2

Other important methods for H2 production include partial oxidation of hydrocarbons:[83]

2 CH4 + O2 → 2 CO + 4 H2

and the coal reaction, which can serve as a prelude to the shift reaction above:[82]

C + H2O → CO + H2

Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia, hydrogen is generated from natural gas.[84] Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.[85]

In the laboratory, H2 is usually prepared by the reaction of dilute non-oxidizing acids on some reactive metals such as zinc, with Kipp's apparatus:

Zn + 2 H+ → Zn2+ + H2

Aluminium can also produce H2 upon treatment with bases:

2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2

The electrolysis of water is a simple method of producing hydrogen. A low voltage current is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals. (Iron, for instance, would oxidize, and thus decrease the amount of oxygen given off.) The theoretical maximum efficiency (electricity used vs. energetic value of hydrogen produced) is in the range 80–94%.[86]

2 H2O(l) → 2 H2(g) + O2(g)

An alloy of aluminium and gallium in pellet form added to water can be used to generate hydrogen. The process also produces alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used. This has important potential implications for a hydrogen economy, as hydrogen can be produced on-site and does not need to be transported.[87]

There are more than 200 thermochemical cycles which can be used for water splitting; around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine cycle, copper–chlorine cycle and hybrid sulfur cycle, are under research and in the testing phase for producing hydrogen and oxygen from water and heat without using electricity.[88] A number of laboratories (including in France, Germany, Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen from solar energy and water.[89]

Anaerobic corrosion

Under anaerobic conditions, iron and steel alloys are slowly oxidized by the protons of water, which are concomitantly reduced to molecular hydrogen (H2). The anaerobic corrosion of iron leads first to the formation of ferrous hydroxide (green rust) and can be described by the following reaction:

Fe + 2 H2O → Fe(OH)2 + H2

In its turn, under anaerobic conditions, the ferrous hydroxide (Fe(OH)2) can be oxidized by the protons of water to form magnetite and molecular hydrogen.
Anaerobic corrosion

Under anaerobic conditions, iron and steel alloys are slowly oxidized by the protons of water, which are concomitantly reduced to molecular hydrogen (H2). The anaerobic corrosion of iron leads first to the formation of ferrous hydroxide (green rust) and can be described by the following reaction:

Fe + 2 H2O → Fe(OH)2 + H2

In its turn, under anaerobic conditions, the ferrous hydroxide (Fe(OH)2) can be oxidized by the protons of water to form magnetite and molecular hydrogen. This process is described by the Schikorr reaction:

3 Fe(OH)2 → Fe3O4 + 2 H2O + H2
(ferrous hydroxide → magnetite + water + hydrogen)

The well-crystallized magnetite (Fe3O4) is thermodynamically more stable than the ferrous hydroxide (Fe(OH)2). This process occurs during the anaerobic corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table.

Geological occurrence: the serpentinization reaction

In the absence of atmospheric oxygen (O2), in deep geological conditions prevailing far away from the Earth's atmosphere, hydrogen (H2) is produced during the process of serpentinization by the anaerobic oxidation, by the water protons (H+), of the ferrous (Fe2+) silicate present in the crystal lattice of fayalite (Fe2SiO4, the olivine iron end-member). The corresponding reaction leading to the formation of magnetite (Fe3O4), quartz (SiO2) and hydrogen (H2) is the following:

3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2
(fayalite + water → magnetite + quartz + hydrogen)

This reaction closely resembles the Schikorr reaction observed in the anaerobic oxidation of ferrous hydroxide in contact with water.

Formation in transformers

Of all the fault gases formed in power transformers, hydrogen is the most common and is generated under most fault conditions; thus, formation of hydrogen is an early indication of serious problems in the transformer's life cycle.[90]

Consumption in processes

Large quantities of H2 are needed in the petroleum and chemical industries. The largest application of H2 is for the processing ("upgrading") of fossil fuels, and in the production of ammonia. The key consumers of H2 in the petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking. H2 has several other important uses. H2 is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing agent of metallic ores.[91]

Hydrogen is highly soluble in many rare earth and transition metals[92] and is soluble in both nanocrystalline and amorphous metals.[93] Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice.[94] These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals,[13] complicating the design of pipelines and storage tanks.[14]

Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding.[95][96] H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies.[97] Because H2 is lighter than air, having a little more than 1/14 of the density of air, it was once widely used as a lifting gas in balloons and airships.[98]

In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection.
Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries.[99] Hydrogen is an authorized food additive (E 949) that allows food package leak testing, among other anti-oxidizing properties.[100]

Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions.[7] Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects.[101] Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs,[102] as an isotopic label in the biosciences,[56] and as a radiation source in luminous paints.[103]

The triple point temperature of equilibrium hydrogen is a defining fixed point on the ITS-90 temperature scale at 13.8033 kelvins.[104]

Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases.

Energy carrier

Hydrogen is not an energy resource,[105] except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development.[106] The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth.[107] Elemental hydrogen from solar, biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery. Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable.[105]

The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is higher.[105] Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible future carrier of energy on an economy-wide scale.[108] For example, CO2 sequestration followed by carbon capture and storage could be conducted at the point of H2 production from fossil fuels.[109] Hydrogen used in transportation would burn relatively cleanly, with some NOx emissions,[110] but without carbon emissions.[109] However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial.[111] Fuel cells can convert hydrogen and oxygen directly to electricity more efficiently than internal combustion engines.[112]
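The volumetric/gravimetric contrast described above can be made concrete with a small calculation (an illustrative sketch using commonly quoted round numbers that are assumptions, not figures from this article: a lower heating value of about 120 MJ/kg for hydrogen versus about 44 MJ/kg for gasoline, and densities of roughly 71 kg/m3 for liquid hydrogen versus roughly 740 kg/m3 for gasoline):

# Rough comparison of gravimetric vs volumetric energy density
# (illustrative round numbers; real values vary with conditions and source).
LHV_H2 = 120e6        # J/kg, lower heating value of hydrogen (approx.)
LHV_GASOLINE = 44e6   # J/kg (approx.)
RHO_LH2 = 71.0        # kg/m^3, liquid hydrogen (approx.)
RHO_GASOLINE = 740.0  # kg/m^3 (approx.)

per_litre_h2 = LHV_H2 * RHO_LH2 / 1000        # J per litre of liquid H2
per_litre_gasoline = LHV_GASOLINE * RHO_GASOLINE / 1000

print(f"Liquid H2: {per_litre_h2/1e6:.1f} MJ/L")        # ~8.5 MJ/L
print(f"Gasoline:  {per_litre_gasoline/1e6:.1f} MJ/L")  # ~32.6 MJ/L
print(f"Per kg, H2 carries {LHV_H2/LHV_GASOLINE:.1f}x the energy of gasoline")

Per unit mass, hydrogen wins by roughly a factor of 2.7; per unit volume, even liquid hydrogen carries only about a quarter of gasoline's energy, which is exactly the trade-off the paragraph above describes.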
Semiconductor industry

Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties.[113] It is also a potential electron donor in various oxide materials, including ZnO,[114][115] SnO2, CdO, MgO,[116] ZrO2, HfO2, La2O3, Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3.[117]

Biological reactions

H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its component two protons and two electrons. Creation of hydrogen gas occurs in the transfer of reducing equivalents produced during pyruvate fermentation to water.[118] The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle.[119]

Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms. Some such organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast.[120] Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen.[121] Efforts have also been undertaken with genetically modified algae in a bioreactor.[122]

Safety and precautions

Main article: Hydrogen safety

Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form.[123] In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids.[124] Hydrogen dissolves in many metals and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[125] leading to cracks and explosions.[126] Hydrogen gas leaking into external air may spontaneously ignite. Moreover, hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns.[127]

Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. Many physical and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Hydrogen detonation parameters, such as critical detonation pressure and temperature, strongly depend on the container geometry.[123]

Notes

1. ^ However, most of the universe's mass is not in the form of baryons or chemical elements. See dark matter and dark energy. 2. ^ 286 kJ/mol: energy per mole of the combustible material (molecular hydrogen)

References

1. ^ Simpson, J.A.; Weiner, E.S.C. (1989). "Hydrogen". Oxford English Dictionary. 7 (2nd ed.). Clarendon Press. ISBN 0-19-861219-2.  2. ^ Conventional Atomic Weights 2013. Commission on Isotopic Abundances and Atomic Weights 3. ^ Standard Atomic Weights 2013. Commission on Isotopic Abundances and Atomic Weights 4. ^ Wiberg, Egon; Wiberg, Nils; Holleman, Arnold Frederick (2001). Inorganic chemistry. Academic Press. p. 240. ISBN 0123526515.  5. ^ "Magnetic susceptibility of the elements and inorganic compounds" (PDF). CRC Handbook of Chemistry and Physics (81st ed.). CRC Press.  6. ^ a b c "Hydrogen". Van Nostrand's Encyclopedia of Chemistry. Wiley-Interscience. 2005. pp. 797–799. ISBN 0-471-61525-0.  7. ^ a b c d e f g h i j k l Emsley, John (2001). Nature's Building Blocks. Oxford: Oxford University Press. pp. 183–191. ISBN 0-19-850341-5.  8. ^ a b Stwertka, Albert (1996). A Guide to the Elements. Oxford University Press. pp. 16–21. ISBN 0-19-508083-1.  9. ^ Palmer, D. (13 September 1997). "Hydrogen in the Universe". NASA. Retrieved 5 February 2008.  10. ^ Laursen, S.; Chang, J.; Medlin, W.; Gürmen, N.; Fogler, H. S. (27 July 2004). "An extremely brief introduction to computational quantum chemistry". Molecular Modeling in Chemical Engineering. University of Michigan. Retrieved 4 May 2015.  11.
^ Presenter: Professor Jim Al-Khalili (21 January 2010). "Discovering the Elements". Chemistry: A Volatile History. 25:40 minutes in. BBC. BBC Four.  12. ^ "Hydrogen Basics — Production". Florida Solar Energy Center. 2007. Retrieved 5 February 2008.  13. ^ a b Rogers, H. C. (1968). "Hydrogen Embrittlement of Metals". Science. 159 (3819): 1057–1064. Bibcode:1968Sci...159.1057R. doi:10.1126/science.159.3819.1057. PMID 17775040.  14. ^ a b Christensen, C.H.; Nørskov, J. K.; Johannessen, T. (9 July 2005). "Making society independent of fossil fuels — Danish researchers reveal new technology". Technical University of Denmark. Retrieved 19 May 2015.  15. ^ "Dihydrogen". O=CHem Directory. University of Southern Maine. Retrieved 6 April 2009.  16. ^ Carcassi, M. N.; Fineschi, F. (2005). "Deflagrations of H2–air and CH4–air lean mixtures in a vented multi-compartment environment". Energy. 30 (8): 1439–1451. doi:10.1016/  17. ^ Committee on Alternatives and Strategies for Future Hydrogen Production and Use, US National Research Council, US National Academy of Engineering (2004). The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs. National Academies Press. p. 240. ISBN 0-309-09163-2.  18. ^ Patnaik, P. (2007). A Comprehensive Guide to the Hazardous Properties of Chemical Substances. Wiley-Interscience. p. 402. ISBN 0-471-71458-5.  19. ^ Schefer, E. W.; Kulatilaka, W. D.; Patterson, B. D.; Settersten, T. B. (June 2009). "Visible emission of hydrogen flames". Combustion and Flame. 156 (6): 1234–1241. doi:10.1016/j.combustflame.2009.01.011.  20. ^ Clayton, D. D. (2003). Handbook of Isotopes in the Cosmos: Hydrogen to Gallium. Cambridge University Press. ISBN 0-521-82381-1.  21. ^ NAAP Labs (2009). "Energy Levels". University of Nebraska Lincoln. Retrieved 20 May 2015.  22. ^ "photon wavelength 13.6 eV". Wolfram Alpha. 20 May 2015. Retrieved 20 May 2015.  23. ^ Stern, D.P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA Goddard Space Flight Center (mirror). Retrieved 20 December 2007.  24. ^ Stern, D. P. (13 February 2005). "Wave Mechanics". NASA Goddard Space Flight Center. Retrieved 16 April 2008.  25. ^ Staff (2003). "Hydrogen (H2) Properties, Uses, Applications: Hydrogen Gas and Liquid Hydrogen". Universal Industrial Gases, Inc. Retrieved 5 February 2008.  26. ^ Tikhonov, V. I.; Volkov, A. A. (2002). "Separation of Water into Its Ortho and Para Isomers". Science. 296 (5577): 2363. doi:10.1126/science.1069513. PMID 12089435.  27. ^ Hritz, J. (March 2006). "CH. 6 – Hydrogen" (PDF). NASA Glenn Research Center Glenn Safety Manual, Document GRC-MQSA.001. NASA. Retrieved 5 February 2008.  28. ^ Shinitzky, M.; Elitzur, A. C. (2006). "Ortho-para spin isomers of the protons in the methylene group". Chirality. 18 (9): 754–756. doi:10.1002/chir.20319. PMID 16856167.  29. ^ Milenko, Yu. Ya.; Sibileva, R. M.; Strzhemechny, M. A. (1997). "Natural ortho-para conversion rate in liquid and gaseous hydrogen". Journal of Low Temperature Physics. 107 (1–2): 77–92. Bibcode:1997JLTP..107...77M. doi:10.1007/BF02396837.  30. ^ Amos, Wade A. (1 November 1998). "Costs of Storing and Transporting Hydrogen" (PDF). National Renewable Energy Laboratory. pp. 6–9. Retrieved 19 May 2015.  31. ^ Svadlenak, R. E.; Scott, A. B. (1957). "The Conversion of Ortho- to Parahydrogen on Iron Oxide-Zinc Oxide Catalysts". Journal of the American Chemical Society. 79 (20): 5385–5388. doi:10.1021/ja01577a013.  32. ^ Clark, J. (2002). "The Acidity of the Hydrogen Halides". Chemguide.
Retrieved 9 March 2008.  33. ^ Kimball, J. W. (7 August 2003). "Hydrogen". Kimball's Biology Pages. Retrieved 4 March 2008.  34. ^ IUPAC Compendium of Chemical Terminology, Electronic version, Hydrogen Bond 35. ^ Sandrock, G. (2 May 2002). "Metal-Hydrogen Systems". Sandia National Laboratories. Retrieved 23 March 2008.  36. ^ a b "Structure and Nomenclature of Hydrocarbons". Purdue University. Retrieved 23 March 2008.  37. ^ "Organic Chemistry". Lexico Publishing Group. 2008. Retrieved 23 March 2008.  38. ^ "Biochemistry". Lexico Publishing Group. 2008. Retrieved 23 March 2008.  39. ^ Moers, K. (1920). "Investigations on the Salt Character of Lithium Hydride". Zeitschrift für Anorganische und Allgemeine Chemie. 113 (191): 179–228. doi:10.1002/zaac.19201130116.  40. ^ Downs, A. J.; Pulham, C. R. (1994). "The hydrides of aluminium, gallium, indium, and thallium: a re-evaluation". Chemical Society Reviews. 23 (3): 175–184. doi:10.1039/CS9942300175.  41. ^ Hibbs, D. E.; Jones, C.; Smithies, N. A. (1999). "A remarkably stable indium trihydride complex: synthesis and characterisation of [InH3P(C6H11)3]". Chemical Communications (2): 185–186. doi:10.1039/a809279f.  42. ^ a b c Miessler, G. L.; Tarr, D. A. (2003). Inorganic Chemistry (3rd ed.). Prentice Hall. ISBN 0-13-035471-6.  43. ^ Okumura, A. M.; Yeh, L. I.; Myers, J. D.; Lee, Y. T. (1990). "Infrared spectra of the solvated hydronium ion: vibrational predissociation spectroscopy of mass-selected H3O+•(H2O)n•(H2)m". Journal of Physical Chemistry. 94 (9): 3416–3427. doi:10.1021/j100372a014.  44. ^ Perdoncin, G.; Scorrano, G. (1977). "Protonation Equilibria in Water at Several Temperatures of Alcohols, Ethers, Acetone, Dimethyl Sulfide, and Dimethyl Sulfoxide". Journal of the American Chemical Society. 99 (21): 6983–6986. doi:10.1021/ja00463a035.  45. ^ Carrington, A.; McNab, I. R. (1989). "The infrared predissociation spectrum of triatomic hydrogen cation (H3+)". Accounts of Chemical Research. 22 (6): 218–222. doi:10.1021/ar00162a004.  46. ^ Gurov, Y. B.; Aleshkin, D. V.; Behr, M. N.; Lapushkin, S. V.; Morokhov, P. V.; Pechkurov, V. A.; Poroshin, N. O.; Sandukovsky, V. G.; Tel'kushev, M. V.; Chernyshev, B. A.; Tschurenkova, T. D. (2004). "Spectroscopy of superheavy hydrogen isotopes in stopped-pion absorption by nuclei". Physics of Atomic Nuclei. 68 (3): 491–97. Bibcode:2005PAN....68..491G. doi:10.1134/1.1891200.  47. ^ Korsheninnikov, A.; Nikolskii, E.; Kuzmin, E.; Ozawa, A.; Morimoto, K.; Tokanai, F.; Kanungo, R.; Tanihata, I.; et al. (2003). "Experimental Evidence for the Existence of 7H and for a Specific Structure of 8He". Physical Review Letters. 90 (8): 082501. Bibcode:2003PhRvL..90h2501K. doi:10.1103/PhysRevLett.90.082501.  48. ^ Urey, H. C.; Brickwedde, F. G.; Murphy, G. M. (1933). "Names for the Hydrogen Isotopes". Science. 78 (2035): 602–603. Bibcode:1933Sci....78..602U. doi:10.1126/science.78.2035.602. PMID 17797765.  49. ^ Oda, Y.; Nakamura, H.; Yamazaki, T.; Nagayama, K.; Yoshida, M.; Kanaya, S.; Ikehara, M. (1992). "1H NMR studies of deuterated ribonuclease HI selectively labeled with protonated amino acids". Journal of Biomolecular NMR. 2 (2): 137–47. doi:10.1007/BF01875525. PMID 1330130.  50. ^ Broad, W. J. (11 November 1991). "Breakthrough in Nuclear Fusion Offers Hope for Power of Future". The New York Times. Retrieved 12 February 2008.  51. ^ Traub, R. J.; Jensen, J. A. (June 1995). "Tritium radioluminescent devices, Health and Safety Manual" (PDF). International Atomic Energy Agency. p. 2.4. Retrieved 20 May 2015.  52. 
^ Staff (15 November 2007). "Tritium". U.S. Environmental Protection Agency. Retrieved 12 February 2008.  53. ^ Nave, C. R. (2006). "Deuterium-Tritium Fusion". HyperPhysics. Georgia State University. Retrieved 8 March 2008.  54. ^ Kendall, C.; Caldwell, E. (1998). "Fundamentals of Isotope Geochemistry". US Geological Survey. Retrieved 8 March 2008.  55. ^ "The Tritium Laboratory". University of Miami. 2008. Retrieved 8 March 2008.  56. ^ a b Holte, A. E.; Houck, M. A.; Collie, N. L. (2004). "Potential Role of Parasitism in the Evolution of Mutualism in Astigmatid Mites". Experimental and Applied Acarology. Lubbock: Texas Tech University. 25 (2): 97–107. doi:10.1023/A:1010655610575.  57. ^ van der Krogt, P. (5 May 2005). "Hydrogen". Elementymology & Elements Multidict. Retrieved 20 December 2010.  58. ^ § IR-3.3.2, Provisional Recommendations, Nomenclature of Inorganic Chemistry, Chemical Nomenclature and Structure Representation Division, IUPAC. Accessed on line 3 October 2007. 59. ^ Boyle, R. (1672). "Tracts written by the Honourable Robert Boyle containing new experiments, touching the relation betwixt flame and air..." London. 60. ^ Winter, M. (2007). "Hydrogen: historical information". WebElements Ltd. Retrieved 5 February 2008.  61. ^ Musgrave, A. (1976). "Why did oxygen supplant phlogiston? Research programmes in the Chemical Revolution". In Howson, C. Method and appraisal in the physical sciences. The Critical Background to Modern Science, 1800–1905. Cambridge University Press. Retrieved 22 October 2011.  62. ^ Cavendish, Henry (12 May 1766). "Three Papers, Containing Experiments on Factitious Air, by the Hon. Henry Cavendish, F. R. S.". Philosophical Transactions. The Royal Society. 56: 141–184. Retrieved 20 May 2015.  63. ^ National Electrical Manufacturers Association (1946). A chronological history of electrical development from 600 B.C. p. 102.  64. ^ "NTS-2 Nickel-Hydrogen Battery Performance 31". Retrieved 6 April 2009.  65. ^ Jannette, A. G.; Hojnicki, J. S.; McKissock, D. B.; Fincannon, J.; Kerslake, T. W.; Rodriguez, C. D. (July 2002). Validation of international space station electrical performance model via on-orbit telemetry (PDF). IECEC '02. 2002 37th Intersociety Energy Conversion Engineering Conference, 2002. pp. 45–50. doi:10.1109/IECEC.2002.1391972. ISBN 0-7803-7296-4. Retrieved 11 November 2011.  66. ^ Anderson, P. M.; Coyne, J. W. (2002). "A lightweight high reliability single battery power system for interplanetary spacecraft". Aerospace Conference Proceedings. 5: 5–2433. doi:10.1109/AERO.2002.1035418. ISBN 0-7803-7231-X.  67. ^ "Mars Global Surveyor". Retrieved 6 April 2009.  68. ^ Lori Tyahla, ed. (7 May 2009). "Hubble servicing mission 4 essentials". NASA. Retrieved 19 May 2015.  69. ^ Hendrix, Susan (25 November 2008). Lori Tyahla, ed. "Extending Hubble's mission life with new batteries". NASA. Retrieved 19 May 2015.  70. ^ Crepeau, R. (1 January 2006). Niels Bohr: The Atomic Model. Great Scientific Minds. Great Neck Publishing. ISBN 1-4298-0723-7.  71. ^ Berman, R.; Cooke, A. H.; Hill, R. W. (1956). "Cryogenics". Annual Review of Physical Chemistry. 7: 1–20. Bibcode:1956ARPC....7....1B. doi:10.1146/annurev.pc.07.100156.000245.  72. ^ Charlton, Mike; Van Der Werf, Dirk Peter (1 March 2015). "Advances in antihydrogen physics". Science Progress. 98 (1): 34–62. doi:10.3184/003685015X14234978376369.  73. ^ Kellerbauer, Alban (29 January 2015). "Why Antimatter Matters". European Review. 23 (01): 45–56. doi:10.1017/S1062798714000532.  74. ^ Gagnon, S. "Hydrogen". 
Jefferson Lab. Retrieved 5 February 2008.  75. ^ Haubold, H.; Mathai, A. M. (15 November 2007). "Solar Thermonuclear Energy Generation". Columbia University. Retrieved 12 February 2008.  76. ^ Storrie-Lombardi, L. J.; Wolfe, A. M. (2000). "Surveys for z > 3 Damped Lyman-alpha Absorption Systems: the Evolution of Neutral Gas". Astrophysical Journal. 543 (2): 552–576. arXiv:astro-ph/0006044free to read. Bibcode:2000ApJ...543..552S. doi:10.1086/317138.  77. ^ Dresselhaus, M.; et al. (15 May 2003). "Basic Research Needs for the Hydrogen Economy" (PDF). Argonne National Laboratory, U.S. Department of Energy, Office of Science Laboratory. Retrieved 5 February 2008.  78. ^ Berger, W. H. (15 November 2007). "The Future of Methane". University of California, San Diego. Retrieved 12 February 2008.  79. ^ McCall Group; Oka Group (22 April 2005). "H3+ Resource Center". Universities of Illinois and Chicago. Retrieved 5 February 2008.  80. ^ Helm, H.; et al. "Coupling of Bound States to Continuum States in Neutral Triatomic Hydrogen" (PDF). Department of Molecular and Optical Physics, University of Freiburg, Germany. Retrieved 25 November 2009.  81. ^ Ogden, J.M. (1999). "Prospects for building a hydrogen energy infrastructure". Annual Review of Energy and the Environment. 24: 227–279. doi:10.1146/  82. ^ a b c Oxtoby, D. W. (2002). Principles of Modern Chemistry (5th ed.). Thomson Brooks/Cole. ISBN 0-03-035373-4.  83. ^ "Hydrogen Properties, Uses, Applications". Universal Industrial Gases, Inc. 2007. Retrieved 11 March 2008.  84. ^ Funderburg, E. (2008). "Why Are Nitrogen Prices So High?". The Samuel Roberts Noble Foundation. Retrieved 11 March 2008.  85. ^ Lees, A. (2007). "Chemicals from salt". BBC. Archived from the original on 26 October 2007. Retrieved 11 March 2008.  86. ^ Kruse, B.; Grinna, S.; Buch, C. (2002). "Hydrogen Status og Muligheter" (PDF). Bellona. Retrieved 12 February 2008.  87. ^ Venere, E. (15 May 2007). "New process generates hydrogen from aluminum alloy to run engines, fuel cells". Purdue University. Retrieved 5 February 2008.  88. ^ Weimer, Al (25 May 2005). "Development of solar-powered thermochemical production of hydrogen from water" (PDF). Solar Thermochemical Hydrogen Generation Project.  89. ^ Perret, R. "Development of Solar-Powered Thermochemical Production of Hydrogen from Water, DOE Hydrogen Program, 2007" (PDF). Retrieved 17 May 2008.  90. ^ Hirschler, M. M. (2000). Electrical Insulating Materials: International Issues. ASTM International. pp. 89–. ISBN 978-0-8031-2613-8. Retrieved 13 July 2012.  91. ^ Chemistry Operations (15 December 2003). "Hydrogen". Los Alamos National Laboratory. Retrieved 5 February 2008.  92. ^ Takeshita, T.; Wallace, W. E.; Craig, R. S. (1974). "Hydrogen solubility in 1:5 compounds between yttrium or thorium and nickel or cobalt". Inorganic Chemistry. 13 (9): 2282–2283. doi:10.1021/ic50139a050.  93. ^ Kirchheim, R.; Mutschele, T.; Kieninger, W.; Gleiter, H.; Birringer, R.; Koble, T. (1988). "Hydrogen in amorphous and nanocrystalline metals". Materials Science and Engineering. 99: 457–462. doi:10.1016/0025-5416(88)90377-1.  94. ^ Kirchheim, R. (1988). "Hydrogen solubility and diffusivity in defective and amorphous metals". Progress in Materials Science. 32 (4): 262–325. doi:10.1016/0079-6425(88)90010-2.  95. ^ Durgutlu, A. (2003). "Experimental investigation of the effect of hydrogen in argon as a shielding gas on TIG welding of austenitic stainless steel". Materials & Design. 25 (1): 19–23. doi:10.1016/j.matdes.2003.07.004.  96. 
^ "Atomic Hydrogen Welding". Specialty Welds. 2007. Archived from the original on 16 July 2011.  97. ^ Hardy, W. N. (2003). "From H2 to cryogenic H masers to HiTc superconductors: An unlikely but rewarding path". Physica C: Superconductivity. 388–389: 1–6. Bibcode:2003PhyC..388....1H. doi:10.1016/S0921-4534(02)02591-1.  98. ^ Almqvist, Ebbe (2003). History of industrial gases. New York, N.Y.: Kluwer Academic/Plenum Publishers. pp. 47–56. ISBN 0306472775. Retrieved 20 May 2015.  99. ^ Block, M. (3 September 2004). Hydrogen as Tracer Gas for Leak Detection. 16th WCNDT 2004. Montreal, Canada: Sensistor Technologies. Retrieved 25 March 2008.  100. ^ "Report from the Commission on Dietary Food Additive Intake" (PDF). European Union. Retrieved 5 February 2008.  101. ^ Reinsch, J.; Katz, A.; Wean, J.; Aprahamian, G.; MacFarland, J. T. (1980). "The deuterium isotope effect upon the reaction of fatty acyl-CoA dehydrogenase and butyryl-CoA". J. Biol. Chem. 255 (19): 9093–97. PMID 7410413.  102. ^ Bergeron, K. D. (2004). "The Death of no-dual-use". Bulletin of the Atomic Scientists. Educational Foundation for Nuclear Science, Inc. 60 (1): 15. doi:10.2968/060001004.  103. ^ Quigg, C. T. (March 1984). "Tritium Warning". Bulletin of the Atomic Scientists. 40 (3): 56–57.  104. ^ International Temperature Scale of 1990 (PDF). Procès-Verbaux du Comité International des Poids et Mesures. 1989. pp. T23–T42. Retrieved 25 March 2008.  105. ^ a b c McCarthy, J. (31 December 1995). "Hydrogen". Stanford University. Retrieved 14 March 2008.  106. ^ "Nuclear Fusion Power". World Nuclear Association. May 2007. Retrieved 16 March 2008.  107. ^ "Chapter 13: Nuclear Energy — Fission and Fusion". Energy Story. California Energy Commission. 2006. Retrieved 14 March 2008.  108. ^ "DOE Seeks Applicants for Solicitation on the Employment Effects of a Transition to a Hydrogen Economy". Hydrogen Program (Press release). US Department of Energy. 22 March 2006. Archived from the original on 19 July 2011. Retrieved 16 March 2008.  109. ^ a b "Carbon Capture Strategy Could Lead to Emission-Free Cars" (Press release). Georgia Tech. 11 February 2008. Retrieved 16 March 2008.  110. ^ Heffel, J. W. (2002). "NOx emission and performance data for a hydrogen fueled internal combustion engine at 1500 rpm using exhaust gas recirculation". International Journal of Hydrogen Energy. 28 (8): 901–908. doi:10.1016/S0360-3199(02)00157-X.  111. ^ Romm, J. J. (2004). The Hype About Hydrogen: Fact And Fiction In The Race To Save The Climate (1st ed.). Island Press. ISBN 1-55963-703-X.  112. ^ Garbak, John (2011). "VIII.0 Technology Validation Sub-Program Overview" (PDF). DOE Fuel Cell Technologies Program, FY 2010 Annual Progress Report. Retrieved 20 May 2015.  113. ^ Le Comber, P. G.; Jones, D. I.; Spear, W. E. (1977). "Hall effect and impurity conduction in substitutionally doped amorphous silicon". Philosophical Magazine. 35 (5): 1173–1187. Bibcode:1977PMag...35.1173C. doi:10.1080/14786437708232943.  114. ^ Van de Walle, C.G. (2000). "Hydrogen as a cause of doping in zinc oxide". Physical Review Letters. 85 (5): 1012–1015. Bibcode:2000PhRvL..85.1012V. doi:10.1103/PhysRevLett.85.1012. PMID 10991462.  115. ^ Janotti, A.; Van De Walle, C.G. (2007). "Hydrogen multicentre bonds". Nature Materials. 6 (1): 44–47. Bibcode:2007NatMa...6...44J. doi:10.1038/nmat1795. PMID 17143265.  116. ^ Kilic, C.; Zunger, Alex (2002). "n-type doping of oxides by hydrogen". Applied Physics Letters. 81 (1): 73–75. Bibcode:2002ApPhL..81...73K. doi:10.1063/1.1482783.  117. 
^ Peacock, P. W.; Robertson, J. (2003). "Behavior of hydrogen in high dielectric constant oxide gate insulators". Applied Physics Letters. 83 (10): 2025–2027. Bibcode:2003ApPhL..83.2025P. doi:10.1063/1.1609245.  118. ^ Cammack, R.; Robson, R. L. (2001). Hydrogen as a Fuel: Learning from Nature. Taylor & Francis Ltd. pp. 202–203. ISBN 0-415-24242-8.  119. ^ Rhee, T. S.; Brenninkmeijer, C. A. M.; Röckmann, T. (19 May 2006). "The overwhelming role of soils in the global atmospheric hydrogen cycle". Atmospheric Chemistry and Physics. 6 (6): 1611–1625. doi:10.5194/acp-6-1611-2006. Retrieved 20 May 2015.  120. ^ Kruse, O.; Rupprecht, J.; Bader, K.; Thomas-Hall, S.; Schenk, P. M.; Finazzi, G.; Hankamer, B. (2005). "Improved photobiological H2 production in engineered green algal cells". The Journal of Biological Chemistry. 280 (40): 34170–7. doi:10.1074/jbc.M503840200. PMID 16100118.  121. ^ Smith, Hamilton O.; Xu, Qing (2005). "IV.E.6 Hydrogen from Water in a Novel Recombinant Oxygen-Tolerant Cyanobacteria System" (PDF). FY2005 Progress Report. United States Department of Energy. Retrieved 6 August 2016.  122. ^ Williams, C. (24 February 2006). "Pond life: the future of energy". Science. The Register. Retrieved 24 March 2008.  123. ^ a b Brown, W. J.; et al. (1997). "Safety Standard for Hydrogen and Hydrogen Systems" (PDF). NASA. Retrieved 5 February 2008.  124. ^ "Liquid Hydrogen MSDS" (PDF). Praxair, Inc. September 2004. Retrieved 16 April 2008.  125. ^ "'Bugs' and hydrogen embrittlement". Science News. Washington, D.C. 128 (3): 41. 20 July 1985. doi:10.2307/3970088. JSTOR 3970088.  126. ^ Hayes, B. "Union Oil Amine Absorber Tower". TWI. Retrieved 29 January 2010.  127. ^ Walker, James L.; Waltrip, John S.; Zanker, Adam (1988). John J. McKetta; William Aaron Cunningham, eds. Lactic acid to magnesium supply-demand relationships. Encyclopedia of Chemical Processing and Design. 28. New York: Dekker. p. 186. ISBN 082472478X. Retrieved 20 May 2015.

Further reading

• Chart of the Nuclides (17th ed.). Knolls Atomic Power Laboratory. 2010. ISBN 978-0-9843653-0-2.
• Ferreira-Aparicio, P.; Benito, M. J.; Sanz, J. L. (2005). "New Trends in Reforming Technologies: from Hydrogen Industrial Plants to Multifuel Microreformers". Catalysis Reviews. 47 (4): 491–588. doi:10.1080/01614940500364958.
• Newton, David E. (1994). The Chemical Elements. New York: Franklin Watts. ISBN 0-531-12501-7.
• Rigden, John S. (2002). Hydrogen: The Essential Element. Cambridge, Massachusetts: Harvard University Press. ISBN 0-531-12501-7.
• Romm, Joseph J. (2004). The Hype about Hydrogen, Fact and Fiction in the Race to Save the Climate. Island Press. ISBN 1-55963-703-X.
• Scerri, Eric (2007). The Periodic System, Its Story and Its Significance. New York: Oxford University Press. ISBN 0-19-530573-6.
I saw this video of the double slit experiment by Dr. Quantum on YouTube. Later in the video he says that the behavior of the electrons changes to produce the double-bar pattern, as if they know that they are being watched or observed. What does that mean? How is that even possible? An atom knows if it is being watched? Seriously? Or, more likely, did I just not understand the video?

– That video has prompted questions here before. The first half of it is a pretty standard explanation of quantum mechanics for laypeople, but at some point it veers off into new-age woo and silly quantum mysticism. The basic answer is that QM describes the way the universe works very accurately. It is futile to assign wacky philosophical explanations to it. The universe will do what the universe will do, and QM is simply a description of its behavior. – Colin K Nov 8 '11 at 22:11
– Don't let Dr Quantum touch you there... He's not a real doctor. – Mikhail Nov 9 '11 at 3:51
– Remember what Dr. Feynman said about QM... If you think you understand QM, then you didn't understand it! – Vineet Menon Nov 9 '11 at 4:47

2 Answers

Before I attempt to answer your question it is necessary to cover some basic background; you must also forgive the length, but you raise some very interesting questions:

There are two things that govern the evolution of a quantum mechanical (QM) system (for all practical purposes (FAPP) I will take the electron and the double-slit/Young's apparatus you mention to be a purely QM system): the time evolution of the system (governed by the Schrödinger equation), which we will denote as $\mathbf{U}$, and the state vector reduction or collapse of the wave function, $\mathbf{R}$.

The Schrödinger equation describes the unitary/time evolution of the wave function or quantum state of a particle, which here we will denote as $\mathbf{U}$. This evolution is well defined and provides information on the evolution of the quantum state of a system. The quantum state itself expresses the entire weighted sum of all the possible alternatives (with complex number weighting factors) that are open to the system. Due to the nature of the complex amplitudes, it is possible for a QM system, like your electron traveling through the Young's apparatus, to be in a complex superposition of multiple states (or, to put it another way, to be in a mixture of possible states/outcomes that the given system will allow). For your system let's assume for simplicity that there are two states: $|T\rangle$, the state associated with the electron going through the [T]op 'slit', and $|B\rangle$, the state associated with the electron passing through the bottom 'slit' (for simplicity we will ignore the phase factors associated with the QM states; see here for more information about the phase factor associated with quantum states). So, just before the electron strikes the wall it is in a superposition of states $\alpha|T\rangle + \beta|B\rangle$, where $\alpha$ and $\beta$ are complex probability amplitudes whose squared moduli give the likelihood of finding the particle in the respective states. Now, in order to determine which path/'slit' the electron actually took (either $|T\rangle$ or $|B\rangle$) we have to make some kind of 'observation'/measurement (as was pointed out above).
This measurement is what causes process $\mathbf{R}$ to occur and subsequently the collapse of the wave function, which forces the superposition of states $\alpha|T\rangle + \beta|B\rangle$ to become either state $|T\rangle$ OR $|B\rangle$. It is this QM state reduction or wave function collapse caused by process $\mathbf{R}$ that invokes all the mystery and the very strange nature of QM. There are numerous paradoxes (the EPR paradox, Schrödinger's cat, etc.; see here for an overview and some background) that stem from this measurement procedure/problem.

At this point I can address your questions: "What does that mean? How is that even possible? An atom knows if it is being watched? Seriously? Or did I just not understand the video?"

So it is the process $\mathbf{R}$ that causes this issue, and you are right to ask what it means when someone says "it knows that it is being observed". To answer the above I will ask one of my own questions: "Is $\mathbf{R}$ a real process?". I ask this because there are two ways of viewing $\mathbf{R}$. Some physicists view the collapse of the wave function and the quantum superpositions of complex amplitudes (the use of state vectors) as real physical properties; others do not (even Dirac, Einstein and Schrödinger himself did not take the probabilistic view of QM as a serious view of what was actually happening in reality; rather, they took it as a mathematical formalism that allowed these physical processes to be predicted). If you deem the state vector to be a real entity, then you must accept the consequential blur between what happens at the quantum level and what happens at the macroscopic/large-scale level. This leads to Feynman's multiple-histories view of QM, where all of the possible outcomes of a QM system occur, and this itself leads to the "many-worlds" interpretations of QM.

I for one (along with the likes of Penrose, Einstein, etc.) believe the current picture of QM is not complete and that there is some physical process causing the collapse of the wave function. The wave function collapse is what causes the electron to choose a QM state, and the act of observation/measurement does seem to cause this collapse. However, this gives rise to the question: "Is it the act of human observation/consciousness that causes this collapse?". It is impossible to argue this is the case. To go into more depth I will have to bring in the idea of quantum entanglement, which is essentially what was described above as a superposition of two QM states. These entanglements are what "collapse" when observations/measurements are made, and are what constitute $\mathbf{R}$. So the real question is what causes the dis-entanglement of two superposed states. There are some very interesting theories that postulate that the state vector reduction is gravitationally induced and not caused by the act of any observation. These ideas also have a bearing on the question of human consciousness! These details and an in-depth discussion of this subject can be found in the very accessible book "Shadows of the Mind" by Roger Penrose.

I hope this was of some help.

The video shows that the interference pattern goes away when one tries to measure which slit the electron went through. The point is that in order to measure which slit the electron went through, one must disturb the electron (shoot some light at it, for example). And amazingly, this interaction is enough to destroy the interference pattern.
In some sense, though, there is still some mystery about this. One says that the measurement (which implies an interaction) collapses the wave function (which describes the electron motion). The double slit experiment is a good place to start to get into the strange world of quantum mechanics!

– Simple and clear... I guess the narrator wanted to convey the simple fact about the uncertainty principle! – Vineet Menon Nov 9 '11 at 4:48
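A small numerical sketch of the point both answers make (an editorial illustration, not part of the original thread; the Gaussian slit amplitudes below are an assumed toy model, not a solution of the actual Schrödinger equation for the apparatus): with no which-path information the detection probability on the screen is |ψ_T(x) + ψ_B(x)|², while a which-path measurement removes the cross term, leaving |ψ_T(x)|² + |ψ_B(x)|².

import numpy as np

# Toy double-slit model: two Gaussian wave packets with a relative phase
# that varies across the screen (assumed illustrative form).
x = np.linspace(-10, 10, 2001)
envelope_T = np.exp(-(x - 1.5)**2 / 8.0)
envelope_B = np.exp(-(x + 1.5)**2 / 8.0)
k = 3.0  # sets the fringe spacing
psi_T = envelope_T * np.exp(1j * k * x / 2)
psi_B = envelope_B * np.exp(-1j * k * x / 2)

p_no_which_path = np.abs(psi_T + psi_B)**2           # fringes: cross term survives
p_which_path = np.abs(psi_T)**2 + np.abs(psi_B)**2   # no fringes: cross term gone

# The interference term is exactly what the which-path measurement destroys:
cross_term = 2 * np.real(psi_T * np.conj(psi_B))
print(np.allclose(p_no_which_path, p_which_path + cross_term))  # True

The electron does not "know" anything; the oscillating cross term simply vanishes once the apparatus (or the stray photon used to probe the path) becomes correlated with the path states.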
A Deleuzean Move

June 24, 2012

It is probably one of the main surprises in the course of growing up as a human that in the experience of consciousness we may meet things like unresolvable contradictions, thoughts that are incommensurable, thoughts that lead into contradictions or paradoxes, or thoughts that point to something which is outside of the possibility of empirical, so to speak "direct", experience. All these experiences form a particular class of experience. For one reason or another, these issues are issues of the mental itself. We definitely have to investigate them if we are going to talk about things like machine-based episteme, or the urban condition, which will be the topic of the next few essays.

There have been only very few philosophers1 who have been embracing paradoxicality without getting caught by antinomies and paradoxes in one or another way.2 Just to be clear: getting caught by paradoxes is quite easy. For instance, by violating the validity of the language game you have chosen. Or by neglecting virtuality. The first of these avenues into persistent states of worries can be observed in the sciences and mathematics3, while the second one is more abundant in philosophy.

Fortunately, playing with paradoxicality without getting trapped by paradoxes is not too difficult either. There is even an incentive to do so. Without paradoxicality it is not possible to think about beginnings, as opposed to origins. Origins, understood as points of {conceptual, historical, factual} departure, are set for theological, religious or mystical reasons, which by definition are always considered as bearers of sufficient reason. To phrase it more accurately, the particular difficulty consists in talking about beginnings as part of an open evolution without universal absoluteness, hence also without the need for justification at any time.

Yet, paradoxicality, the differential of actual paradoxes, could form stable paradoxes only if possibility is mixed up with potentiality, as is for instance the case for perspectives that could be characterised as reductionist or positivist. Paradoxes exist strictly only within that conflation of possibility and potentiality. Hence, if a paradox or antinomy seems to be stable, one can always find an implied primacy of negativity in lieu of the problematic field spawned and spanned by the differential. We can thus also observe the pouring of paradoxes if the differential is rejected or neglected, as in Derrida's approach, or in the related functionalist-formalist ethics of the Frankfurt School, namely that proposed by Habermas [4]. Paradoxes are like knots that can always be untangled in higher dimensions. Yet, this does NOT mean that everything could be smoothly tiled without frictions, gaps or contradictions.

Embracing the paradoxical thus means to deny the linear, to reject the origin and the absolute, the centre points [6] and the universal. We may perceive remote greetings from Nietzsche here4. Perhaps you may already have classified the contextual roots of these hints: it is Gilles Deleuze to whom we refer here, and who may well be regarded as the first philosopher of open evolution, the first one who rejected idealism without sacrificing the Idea.5 In the hands of Deleuze (or should we say minds?) paradoxicality neither actualizes into paradoxes nor into idealistic dichotomic dialectics.
A structural(ist) and genetic dynamism first synthesizes the Idea, and by virtue of the Idea, as well as the space and time immanent to the Idea, paradoxicality turns productive.7

Philosophy is revealed not by good sense but by paradox. Paradox is the pathos or the passion of philosophy. There are several kinds of paradox, all of which are opposed to the complementary forms of orthodoxy – namely, good sense and common sense. […] paradox displays the element which cannot be totalised within a common element, along with the difference which cannot be equalised or cancelled at the direction of a good sense. (DR227)

As our title already indicates, we not only presuppose and start with some main positions and concepts of Deleuzean philosophy, particularly those he once developed in Difference and Repetition (D&R)8. There will be more details later9. We10 also attempt to contribute some "genuine" aspects to it. In some way, our attempt could be conceived as a development that is an alternative to part V of D&R, entitled "Asymmetrical Synthesis of the Sensible".

This Essay

Throughout the collection of essays about the "Putnam Program" on this site we expressed our conviction that future information technology demands an assimilation of philosophy by the domain of computer sciences (e.g. see the superb book by David Blair, "Wittgenstein, Language and Information" [47]). There are a number of areas, of both technical and societal or philosophical relevance, which give rise to questions that have already started to become graspable, and not just in the computer sciences. How to organize the revision of beliefs?11 What is the structure of the "symbol grounding problem"? How to address it? Or how to avoid the fallacy of symbolism?12 Obviously we can't tackle such questions without literacy about concepts like belief or symbol, which of course can't be reduced to merely technical notions. Beliefs, for instance, can't be reduced to uncertainty or its treatment, despite the fact that there is already some tradition in analytical philosophy, the computer sciences and statistics of doing so.

Furthermore, with the advent of emergent mental capabilities in machines, ethical challenges appear. These challenges are on both sides of the coin. They relate to the engineers who are creating such instances, as well as to lawyers who, on the other side of the spectrum, have to deal with the effects and the properties of such entities, and even to "users" who have to build some "theory of mind" about them, some kind of folk psychology. And last but not least, just the externalization of informational habits into machinal contexts often triggers pseudo-problems and "deep" confusion.13 Examples of such confusion are the question about the borders of humanity, i.e. as a kind of defensive war fought by anthropology, or the issue of artificiality. Where does the machine end and where does the domain of the human start? How can we speak reasonably about "artificiality" if our brain/mind still remains dramatically non-understood and is thus implicitly conceived by many as a kind of bewildering nature? And finally, how to deal with technological progress: when will computer scientists need self-imposed guidelines similar to those geneticists ratified for their community in 1975 during the Asilomar Conference? Or are such guidelines illusory or misplaced, because we are weaving ourselves so intensively into our new informational carpets, made from multi- or even meta-purpose devices, that are downright flying carpets?
There is also a clearly recognizable methodological reason for bringing the inventioneering of advanced informational "machines" and philosophy closer together. The domain of machines with advanced mental capabilities (I deliberately avoid the traditional term "artificial intelligence"), let us abbreviate it MMC, acquires ethical weight in itself. MMC establishes a subjective Lebenswelt (lifeworld) that is strikingly different from ours and which we can't understand analytically any more (if at all).14 The challenge then is how to talk about this domain. We should not repeat the same fallacy that anthropology and anthropological philosophy have been committing since Kant, where human measures have been applied (and still are, up to today) to "nature". If we are going to compare two different entities we need a differential position from which both can be instantiated. Note that no resemblance can be expected between the instances, nor between the instances and the differential. That differential is a concept, or an idea, and as such it can't be addressed by any kind of technical perspective. Hence, questions of the mode of speaking can't be conceived as a technical problem, especially not for the domain of MMC, also due to the implied self-referentiality of the mental itself.

Taken together, we may say that our motivation follows two lines. Firstly, the concern is about the problematic field, the problem space itself, about the possibility that problems could become visible at all. Secondly, there is a methodological position, characterisable as a differential, that is necessary to talk about incommensurable entities that are equipped with mental capacities.15 Both directions and all related problems can be addressed in one and the same move, or so at least is our proposal.

The goal of this essay is the introduction and a brief discussion of a still emerging conceptual structure that may be used as an image of thought, or likewise as a tool in the sense of an almost formal mental procedure, helping to avoid worries about the diagnosis of the challenges opened by the new technologies, or supporting that diagnosis. Of course, it will turn out that the result is not just applicable to the domain of the philosophy of technology.

In the following we will introduce a unique structure that has been inspired by heterogeneous philosophical sources, and not only by those. These stretch from Aristotle to Peirce, from Spinoza to Wittgenstein, and from Nietzsche to Deleuze, to name but a few, just to give you an impression of what mindset you could expect. Another important source is mathematics, yet not used as a ready-made system for formal reasoning, but rather as a source for a certain way of thinking. Last, but not least, biology contributes as the home of the organon, of complexity, of evolution, and, more formally, of self-referentiality.

The structure we will propose as a starting point appears merely technical, thus arbitrary; at the same time it draws upon the primary amalgam of the virtual and the immanent. Its paradoxicality consists in its potential to describe the "pure" any, the Idea that comprises any beginning. Its particular quality, as opposed to any other paradoxicality, is caused by a profound self-referentiality that simultaneously leads to its vanishing, its genesis and its own actualization. In this way, the proposed structure solves a challenge that is considered by many throughout the history of philosophy to be one of the most serious ones.
The challenge in question is that of sufficient reason, justification and conditionability. To be more precise, that challenge is not solved; it is more correct to say that it is dissolved, made to disappear. In the end, the problem of sufficient reason will be marked as a pseudo-problem.

Here, a small remark to the reader is necessary. After some weeks of putting this down, it turned out that any (more or less) intelligible way of describing the issues exceeds the classical size of a blog entry. After all, it now comprises approx. 150'000 characters (incl. white space), which would amount to 42+ pages on paper. So, it is more like a monograph. Still, I feel that there are many important aspects left out. Nevertheless I hope that you enjoy reading it. The following provides a table of contents (active links) for the remainder of this essay:

2. Brief Methodological Remark

As we already noted, the proposed structure is self-referential. Self-referentiality also means that all concepts and structures needed for an initial description will be justified by the working of the structure, in other words, by its immanence. Actually, similarly to the concept of the Idea in D&R, virtuality and immanence come very close to each other; they are set to be co-generative. As an Idea, the proposed structure is complete. Like any other idea, it needs to be instantiated into performative contexts; thus it is to be conceived as an entirety, yet neither as a completeness nor as a totality. Yet, its self-referentiality allows for, and actually also generates, a "self-containment" that results in a fractal mirroring of itself, in a self-affine mapping. Metaphorically, it is a concept that develops like the leaf of a fern. Superficially, it could look like a complete and determinate entirety, but it is not, similar to area-covering curves in mathematics. Those fill a 2-dimensional area infinitesimally, yet with regard to their production system they remain truly 1-dimensional. They are a fractal, an entity to which we can't apply ordinary integer dimensionality. Thus, our concept also develops into instances of fractal entirety.

For these reasons, it would also be wrong to think that the structure we will describe in a moment is "analytical", even though it is possible to describe its "frozen" form by means of references to mathematical concepts. Our structure must be understood as an entity that is not neutral or invariant against time; it forms its own sheaves of time (as I. Prigogine described it). Analytics is always blind against its generative milieu. Analytics can't tell anything about the world, contrary to a widely held opinion. It is not really a surprise that Putnam recommended reducing the concept of "analytic" to "an inexplicable noise". Very basically, it is a linear endeavor that necessarily excludes self-referentiality. Its starting point is always based on an explicit reference to a kind of apparentness, or even revelation. Analytics not only presupposes a particular logic, but also conflates transcendental logic and practiced quasi-logic. Moreover, the pragmatics of analysis claims to be free from constructive elements. All these characteristics do not apply to our proposal, which is as little "analytical" as the philosophy of Deleuze, where it starts to grow itself on the notion of the mathematical differential.

3. The Formal Structure

For the initial description of the structure we first need a space of expressibility.
This space will then be equipped with some properties. And right at the beginning I would like to emphasize that the proposed structure does not by itself "explain" anything, just like a (philosophical) grammar. Rather, through its usage, that is, its unfolding in time, it shows itself and provides a stable as well as a generative ground.

The space of the structure is not a Cartesian space, where some concepts are mapped onto orthogonal dimensions, or where concepts are thought to be represented by such dimensions. In a Cartesian space, the dimensions are independent of each other.16 Objects are represented by the linear and additive combination of values along those dimensions, and thus their entirety gets broken up. We lose the object as a coherent object, and there would be no way to regain it later, regardless of the means and the tools we would apply. Hence the Cartesian space is not useful for our purposes. Unfortunately, all of current mathematics is based on the Cartesian, analytic conception. Currently, mathematics is a science of control, or more precisely, a science about the arrangement of signs as far as it concerns linear, trivial machines that can be described analytically. There is not yet a mathematics of the organon. Probably category theory is a first step in that direction.

Instead, we conceive our space as an aspectional space, as we introduced it in a previous chapter. In an aspectional space concepts are represented by "aspections" instead of "dimensions". In contrast to the values in a dimensional space, values in an aspectional space cannot be changed independently of each other. More precisely, we can always keep at most one aspection constant, while the values along all the others change simultaneously. (So-called ternary diagrams provide a distantly related example of this in a 2-dimensional space; a small numerical sketch follows below.) In other words, within the N-manifolds of the aspectional space all values always depend on each other.
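The ternary analogy can be made concrete with a few lines of code (a purely illustrative editorial sketch; the proportional renormalization rule is an assumption chosen for simplicity, not part of the essay's structure): in a composition of three values constrained to sum to 1, changing any one value necessarily changes the others.

# Toy ternary composition: three "aspects" constrained to sum to 1.
# Changing one value forces the others to change -- a distant analogue
# of the mutual dependence of aspections described above.
def set_aspect(comp, index, new_value):
    """Set comp[index] to new_value and renormalize the rest proportionally."""
    rest = [v for i, v in enumerate(comp) if i != index]
    scale = (1.0 - new_value) / sum(rest)
    out = [v * scale for v in comp]
    out[index] = new_value
    return out

comp = [0.2, 0.3, 0.5]
print(set_aspect(comp, 0, 0.4))  # -> [0.4, 0.225, 0.375]; all three values moved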
This aspectional space is endowed with a hyperbolic topological structure. The space of our structure is not flat. You may take M.C. Escher's plates as a visualization of such a space. Yet, our space is different from such a fixed space; it is a relativistic space that is built from overlapping hyperbolic spaces. At each point in the space you will find a point of reference ("origin") for a single hyperbolic reference system. Our hyperbolic space is locally centred. A mathematical field concerned with comparable structures would be differential topology. So far, the space is still quite easy and intuitive to understand. At least there is still a visualization possible for it. This changes, probably, with the next property.

Points in this aspectional space are not "points", or, expressed in a better, less obscure way, our space does not contain points at all. In a Cartesian space, points are defined by one or more scales and their properties. For instance, in an x-y coordinate system we could have real numbers on both dimensions, i.e. scales, or we could have integers on the first, and reals on the second one. The interaction of the number systems used to create a scale along a dimension determines the expressibility of the space. This way, a point is given as a fixed instance of a set of points as soon as the scale is given. Points themselves are thus said to be 0-dimensional. Our "points", i.e. the content of our space, are quite different from that. The space is not "made up" from inert and passive points but from the second differential, i.e. ultimately a procedure that invokes an instantiation. Our aspectional space is thus made from infinitesimal procedural sites, or "situs" as Leibniz probably would have said.

If we represented physical space by a Cartesian dimensional system, then the second derivative would represent an acceleration. Take this as a metaphor for the behavior of our space. Yet, our space is not a space that is passive. The second-order differential makes it an active space, and a space that demands activity. Without activity it is "not there". We could also describe it as the mapping of the intensity of the dynamics of transformation. If you tried to point to a particular location, or situs, in that space, which is of course excluded by its formal definition, you would instantaneously be "transported" or transformed, such that you would find yourself elsewhere. Yet, this "elsewhere" cannot be determined in Cartesian ways: first, because that other point does not exist; second, because it depends on the interaction of the subject's contribution to the instantiation of the situs and the local properties of the space. Finally, we can say that our aspectional space is thus not representational, as the Cartesian space is.

So, let us sum up the elemental17 properties of our space of expressibility:

• 1. The space is aspectional.
• 2. The topology of the space is locally hyperbolic.
• 3. The substance of the space is a second-order differential.

4. Mapping the Semantics

We are now going to map four concepts onto this space. These concepts are themselves Ideas in the Deleuzean sense, but they are also transcendental. They are indeterminate and real, just as virtual entities are. As such, we take the chosen concepts as inexplicable, yet also as instantiable. These four concepts have been chosen initially in a hypothetical gesture, such that they satisfy two basic requirements. First, it should not be possible to reduce them to one another. Second, together they should allow us to build a space of expressibility that would contain as many philosophical issues of mentality as possible. For instance, it should contain any aspect of epistemology or of languagability, but it does not aim to contribute to the theory of morality, i.e. ethics, despite the fact that there is, of course, significant overlap. For instance, one of the possible goals could be to provide a space that allows us to express the relation between semiotics and any logic, or between concepts and models. So, here are the four transcendental concepts that form the aspections of our space as described above:

• virtuality
• mediality
• model
• concept

Inscribing four concepts into a flat, i.e. Euclidean, aspectional space would result in a tetrahedral space. In such a space, there would be "corners," or points of inflection, which would represent the determinateness of the concepts mapped to the aspections. As we have emphasized above, though, our space is not flat. There is no static visualization possible for it, since our space can't be mapped to the flat Euclidean space of a drawing, or to the space of our physical experience. So, let us proceed to the next level by resorting to the hyperbolic disc. If we take any two points inside the disc, their distance is determinate. Yet, if we take any two points at the border of the disc, the distance between those points is infinite from the inside perspective, i.e. for any perspective associated to a point within the disc. Likewise, the distance from any point inside the disc to the border is infinite.
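This behavior of the hyperbolic disc can be made tangible with the standard Poincaré disc metric (an editorial illustration; the essay itself does not commit to this particular model): the distance d(u, v) = arcosh(1 + 2|u − v|² / ((1 − |u|²)(1 − |v|²))) stays finite for interior points but diverges as either point approaches the border.

import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points of the open unit disc
    (Poincare disc model), given as (x, y) pairs with |u|, |v| < 1."""
    du2 = (u[0] - v[0])**2 + (u[1] - v[1])**2
    nu2 = 1.0 - (u[0]**2 + u[1]**2)
    nv2 = 1.0 - (v[0]**2 + v[1]**2)
    return math.acosh(1.0 + 2.0 * du2 / (nu2 * nv2))

# The Euclidean step from r=0.0 to r=0.5 and from r=0.49 to r=0.99 is the
# same length, but the hyperbolic distance blows up near the border:
print(poincare_distance((0.0, 0.0), (0.5, 0.0)))       # ~1.10
print(poincare_distance((0.49, 0.0), (0.99, 0.0)))     # ~4.22
print(poincare_distance((0.0, 0.0), (0.999999, 0.0)))  # ~14.5, growing without bound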
The distance from any point inside the disc to the border is likewise infinite. This provides a good impression of how transcendental concepts, which by definition cannot be accessed “as such”, or as a thing, can be operationalized by the hyperbolic structure of a space. Our space is more complicated, though, as it is not structured by a fixed hyperbolic topology that is, so to speak, global for the entire disc. The consequence is that our space does not have a border, yet at the same time it remains an aspectional space. Turning the perspective around, we could say that the aspections are implied into this space. Let us now briefly visit these four concepts.

4.1. Virtuality

Virtuality describes the property of “being virtual”. Saying that something is virtual does not mean that this something does not exist, although the property “existing” cannot be applied to it either. It is fully real, but not actual. Virtuality is the condition of potentiality, and as such it is a transcendental concept. Deleuze repeatedly emphasises that virtuality does not refer to a possibility. In the context of information technologies it is often said that this or that is “virtual”, e.g. virtualized servers, or virtual worlds. This usage is not the same as in philosophy, since, quite obviously, we use the virtual server as a server, and the world dubbed “virtual” does indeed exist in an actualized form. Yet, in both examples there is also some resonance with the philosophical concept of virtuality. But this virtuality is not exclusive to the simulated worlds, the informationally defined server instances or the WWW. Virtualization is, as we will see in a moment, implied by any kind of instance of mediality.

As just said, virtuality, and thus also potentiality, must be strictly distinguished from possibility. Possible things, even if not yet present or existent, can be thought of in a quasi-material way, as if they existed in their material form. We can even say that possible things and the possibilities of things are completely determined in any given moment. It is not possible to say so about potentiality. Yet, without the concept of potentiality we could not speak about open evolutionary processes. Neglecting virtuality is thus necessarily equivalent to the a priori claim of determinateness, which is methodologically and ethically highly problematic. The philosophical concept of virtuality has been known since Aristotle. Recently, Bühlmann18 brought it into the vicinity of semiotics and the question of reference19 in her work about mediality. There would be much, much more to say about virtuality here; alas, the space is missing…

4.2. Mediality

Mediality, that is, the medial aspect of things, facts and processes, is among the most undervalued concepts nowadays, even as we get some exercise by means of so-called “social media”. That term perfectly puts this blind spot on stage through its emphasis: neither is there any mediality without sociality, nor any sociality without mediality. Mediality is the concept that was “discovered” last within our small group. There is a growing body of publications, but many are, astonishingly, deeply infected by romanticism or positivism20, with only a few exceptions.21 Mediality comprises issues like context, density, or transformation qua transfer. Mediality is a concept that helps to focus on the appropriate level of integration in populations or flows when talking about semantics or meaning and their dynamics.
Any thing, whether material or immaterial, that occurs in sufficient density in its manifoldness may develop a mediality within a sociality. Mediality as a “layer of transport” is co-generative to sociality. Media are never neutral with respect to what they transport, albeit one can often find counteracting forces here. Signs and symbols could not exist as such without mediality. (This proposal is based on the primacy of interpretation, which is rejected by the modernist set of beliefs. The costs of that rejection are, however, tremendous, as we are going to argue here.) The same is true for words and language as a whole. In real contexts, we usually find several, if not many, medial layers. Of course, signs and symbols are not exhaustively described by mediality. They need reference, which is a compound that comprises modeling.

4.3. Model

Models and modeling need not be explicated much any more, as they are one of the main issues throughout our essays. We would just like to recall the obvious fact that a “pure” model is not possible. We need symbols and rules, e.g. about their creation or usage, and necessarily neither is itself part of the model. Most significantly, models need a purpose, a concept to which they refer. In fact, any model presupposes an environment, an embedding that is given by concepts and a particular social embedding. Additionally, models would not be models without virtuality. On the one hand, virtuality is implied by the fact that models are incarnations of specific modes of interpretation; on the other hand, they imply virtuality themselves, since they are, well, just models.

We frequently mentioned that it is only through models that we can build up references to the external world. Of course, models are not sufficient to describe that referencing. There is also the contingency of the manifold of populations and the implied relations as quasi-material arrangements that contribute to the reference of the individual to the common. Yet, only modeling allows for anticipation and purposeful activity. It is only through models that behavior is possible, insofar as any behavior is already differentiated behavior. Models are thus the major site where information is created. It is not just by chance that the 20th century experienced the abundance of models and of information as concepts. In mathematical terms, models can be conceived as second-order categories. More profanely, but equivalently, we can say that models are arrangements of rules for transformation. This implies the whole issue of rule-following as it has been investigated and formulated by Wittgenstein. Note that rule-following itself is a site of paradoxicality. As there is no private language, there is also no private model. Philosophically, and a bit more abstractly, we could describe models as the compound of providing the possibility for reference (they are one of the conditions for such) and the institutionalized site for creating (f)actual differences.

4.4. Concepts

Concept is probably one of the most abused, or at least most misunderstood, concepts, at least in modern times. So-called Analytical Philosophy claims over and over again that concepts could be explicated unambiguously, that concepts could be clarified or defined. This way, the concept and its definition are equated. Yet, a definition is just a definition, not a concept. The language game of the definition makes sense only in a tree of analytical proofs that starts from axioms. Definitions need not be interpreted.
They are fully given by themselves. Thus, the idea of clarifying a concept is nothing but an illusion. Deleuze writes (DR228):

It is not surprising that, strictly speaking, difference should be ‘inexplicable’. Difference is explicated, but in systems in which it tends to be cancelled; this means only that difference is essentially implicated, that its being is implication. For difference, to be explicated is to be cancelled or to dispel the inequality which constitutes it. The formula according to which ‘to explicate is to identify’ is a tautology.

Deleuze points to the particular “mechanism” of eradication by explication, which is equal to its transformation into the sayable. There is a difference between 5 and 7, but the arithmetic difference does not cover all aspects of difference. Yet, by explicating the difference using some rules, all the other differences except the arithmetic one vanish. Thus, this inexplicability is not limited to the concept of difference. In some important way, these other aspects are much more interesting and important than the arithmetic operation itself or its result. Actually, we can understand differencing only insofar as we are aware of these other aspects. Elsewhere, we already cited Augustine and his remark about time:22 “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.”

Here, we can observe at least two things. First, this observation may well be interpreted as the earliest rejection of “knowledge as justified true belief”, a perspective which became popular in modernism. Meanwhile it has been shown to be inadequate by the so-called Gettier problem. The consequences for the theory of databases, or for machine-based processing of data, can hardly be overestimated. It clearly shows that knowledge cannot be reduced to confirmed hypotheses qua validated models, and belief cannot be reduced to a kind of pre-knowledge. Belief must be something quite different. The second thing to observe in these two examples concerns the status of interpretation. While Augustine seems to be somewhat desperate, at least for a moment23, analytical philosophy tries to abolish the annoyance of indeterminateness by killing the freedom inherent in interpretation, which always and inevitably happens if the primacy of interpretation is denied.

Of course, the observed indeterminateness is not limited to time either. Whenever you try to explicate a concept, whether you describe it or define it, you meet the insurmountable difficulty of picking one of many interpretations. Again: there is no private language; meaning, references and signs exist only within social situations of interpretation. In other words, we again find the necessity of invoking the other conceptual aspects from which we build our space. Without models and mediality there is no concept. And even more profoundly than models, concepts imply virtuality. In the opposite direction we can now understand that these four concepts are not only not reducible to each other. They are dependent on each other and, somewhat paradoxically, they are even competitively counteracting. From this we can expect an abstract dynamics somewhat reminiscent of the patterns evolving in reaction-diffusion systems. These four concepts imply the possibility of a basic creativity in the realm of the Idea, in the indeterminate zone of actualisation that will result in a “concrete” thought, or at least the experience of thinking.
Before we proceed, we would like to introduce a notation that should help avoid misunderstandings. Whenever we refer to the transcendental aspects between which the aspections of our space stretch out, we use capital letters and mark them additionally by a bar, such as “_Concept” or “_Model”. The whole set of aspects we denote by “_A”, while its unspecified items are indicated by “_a”.

5. Anti-Ontology: The T-Bar-Theory

The four conceptual aspects _A play different roles. They differ in the way they get activated. This becomes visible as soon as we use our space as a tool for comparing various kinds of mental concepts or activities, such as believing, referring, explicating or understanding. These we will inspect in detail later. Above we described the impossibility of explicating a concept without departing from its “conceptness”. Well, such a description is actually not appropriate according to our aspectional space. The four basic aspections are built from transcendental concepts. There is a subjective, imaginary, yet pre-specific scale along those aspections. Hence, in our space “conceptness” is not a quality but an intensity, or almost a degree, a quantity. The key point then is that a mental concept or activity always relates to all four transcendental aspections, in such a way that the relative location of the mental activity cannot be changed along just a single aspection alone.

We can also recognize another significant step provided by our space of expressibility. Traditionally, concepts are used as existential signifiers, in philosophy often called qualia. Such existential signifiers are only capable of indicating presence or absence, and are thus confined to a naive ontology of Hamletian style (to be or not to be). It is almost impossible to build a theory or a model from existential signifiers. From the modeling or measurement-theory point of view, concepts are on a binary scale. Although concepts collect a multitude of such binary usages, appropriate modeling remains impossible due to the binary scale, unless we probabilized all potential dual pairs.

Similarly to the case of logic, we also have to distinguish the transcendental aspects _a, that is, the _Model, _Mediality, _Concept, and _Virtuality, from the respective entities that we find in applications. Those practiced instances of _a are just that: instances, produced by orthoregulated habits. Yet, the instances of _a that could be gained through the former’s actualization do not form singularities, or even qualia. Any _a can be instantiated into an infinite diversity of concrete, i.e. definable and sayable, abstract entities. That is the reason for the kinship between probabilistic entities and transcendental perspectives. We could operationalize the latter by the former, even if we have to distinguish sharply between possibility and potentiality. Additionally, we have to keep in mind that the concrete instances do not live independently from their transcendental ancestry24. Deleuze provides us with a nice example of this dynamics at the beginning of part V of D&R. For him, “divergence” is an instance of the transcendental entity “Difference”. What he calls “phenomenon” we dubbed “instance”, which is probably more appropriate in order to avoid the reference to phenomenology and its related difficulties. Calling it “phenomenon” pretends, typically for any kind of phenomenology or ontology, a deeply unjustified independence of mentality from its underlying physicality.
This step from existential signifiers to the situs in a space of expressibility, made possible by our aspectional space, can hardly be overestimated. Take for instance the infamous question that attracted so many misplaced answers: “How do words or concepts acquire reference?” This question appears especially troubling because signs refer only to signs.25 In existential terms, and all the terms in that question are existential ones, this question cannot be answered, indeed not even addressed at all. As a consequence, deep mystical chasms unnecessarily keep separating the world from the concepts. Any resulting puzzle is based on a misconception. Think of Plato’s chorismos (Greek for “separation”) of explanation and description, which recently has been taken up, refreshed and declared a “chasm” by Epperson [31] (a theist realist, according to his own positioning; we will meet him again later). The various misunderstandings are well known, ranging from nominalism to externalist realism to scientific constructivism. They all vanish in a space that overcomes the existentiality embedded in the terms. Mathematically speaking, we have to represent words, concepts and references as probabilized entities, as quasi-species, as Manfred Eigen called it in a different context, in order to avoid naive mysticism regarding our relations to the world.

It seems that our space provides the possibility of measuring and comparing different ways of instantiating _A, a kind of stable scale. We may use it to access concepts differentially; that is, we are now able to transform concepts in a space of quantitability (a term coined by Vera Bühlmann). The aspectional space as we have constructed it is thus necessary even in order to talk just about modeling. It would provide the possibility for theories about any transition between any mental entities one could think of. For instance, if we conceive “reference” as the virtue of purposeful activity and anticipation, we could explore and describe the conditions for the explication of the path between the _Model on the one side and the _Concept on the other. On this path, which is open on both sides, we could, for instance, first meet different kinds of symbols near the _Model, starting with the idealization and naming of models, followed by the mathematical attitude concerning the invention and treatment of signs, _Logic and all of its instances, semiosis and signs, words, and finally concepts, not forgetting above all that this path necessarily implies a particular dynamics regarding _Mediality and _Virtuality.

Such an embedding of transformations into co-referential transcendental entities is all we can expect to “know” reliably. That was the whole point of Kant. Well, here we can be more radical than Kant dared to be. The choreostemic space is a rejection of the idea of “pure thought”, or pure reason, since such knowledge would need to undergo a double instantiation, and this brings subjectivity back in. It is just a phantasm to believe that propositions could be secured up to “truth”. This is even true for the least possible common denominator, existence.

I think that we cannot know whether something exists or not (here, I pretend to understand the term exist), that it is meaningless to ask this. In this case, our analysis of the legitimacy of uses has to rest on something else. (David Blair [49])

Note that Blair is very careful in his wording here. He is not claiming any universality regarding justification or legitimization.
His proposal is simply that any reference to “Being” or “Existence” is useless a priori. Claiming the seriousness of ontology as an aspect of, or even as, an external reality immediately instantiates the claim of an external reality as such, one that would be such-and-such irrespective of its interpretation. This, in turn, would amount to a stance that sets the proof of the irrelevance of interpretation, and of interpretive relativism, as its goal. Any familiar associations here? Not least, physicists, and only physicists, speak of “laws” in nature. All of this is, of course, unholy nonsense, propaganda and ideology at the least. As a matter of fact, even in a quite strict naturalist perspective, we need concepts and models. Those are obviously not part of the “external” nature. Ontology is an illusion, completely and in any of its references, leading to pseudo-problems that are indeed “very difficult” to “solve”. Even if we manage to believe in “existence”, it remains a formless existence; more precisely, it has to remain formless. Any ascription of form would immediately strike back as a denial of the primacy of interpretation, hence as a naturalist determinism. Before addressing the issue of the topological structure of our space, let us trace some other figures in it.

6. Figures and Forms

Whenever we explicate a concept we imply or refer to a model. In a more general perspective, this applies to virtuality and mediality as well. To give an example: describing a belief does not mean to believe, but to apply a model. The question now is how to revert the accretion of mental activities towards the _Model. _Virtuality cannot be created deliberately, since in that case we would again refer to the concept of model. Speaking about something, that is, saying in the Wittgensteinian sense, intensifies the _Model. It is not too difficult, though, to find some candidate mechanics that turns the vector of mental activity away from the _Concept. It is through performance, mere action without explicable purpose, that we introduce new possibilities for interpretation and thus also enriched potential as the (still abstract) instance of _Virtuality. In contrast to that, the _Concept is implied. The _Concept can only be demonstrated, even by modeling. Traveling on some path heading towards the _Model, the need for interpretation continuously grows; hence, the more we try to approach the “pure” _Model, the stronger the force that will flip us back towards the _Concept. _Mediality, finally, the fourth of our aspects, binds its immaterial colleagues to matter, or quasi-matter, in processes that are based on the multiplicity of populations. It is through _Mediality and its instances that chunks of information start to behave as a device, as a quasi-material arrangement. The whole dynamics between _Concepts and _Models requires a symbol system, which can evolve only through the reference to _Mediality, which in turn is implied by populations of processes.

Above we said that the motivation for this structure is to provide a space of expressibility for mental phenomena in their entirety. Mental activity does not consist of isolated, rare events. It is a multitude of flows integrated into various organizational levels, even if we considered only the language part. Mapping these flows into our space raises the question whether we could distinguish different attractors, different forms of recurrence.
Addressing this question establishes an interesting configuration, since we are talking about the form of mental activities. Perhaps it is also appropriate to call these forms “mental styles”. In any case, we may take our space as a tool to formalize the question about potential classes of mental styles. In order to render our space more accessible, we take the tetrahedral body as a (crude) approximating metaphor for it. Above we stressed the point that any explication intensifies the _Model aspect. Transposed into a Cartesian geometry we would have said, metaphorically, that explication moves us towards the corner of the model. Let us stick to this primitive representation for a moment, in favour of a more intuitive understanding. Now imagine constructing a vector that points away from the model corner, right to the middle of the area spanned by virtuality, mediality and concept. It is pretty clear that mental activity that leaves the model behind, quite literally so, will in this way be some form of basic belief, or revelation. Religiosity (as a mental activity) may well be described as the attempt to balance virtuality, mediality and concept without resorting to any kind of explication, i.e. models. Of course, this is not possible in an absolute manner, since it is not possible to move in the aspectional space without any explication. This in turn yields a residual that again points towards the model corner. Inversely, it is not possible to move only in the direction of the _Model. Nevertheless, there are still many people proposing such a move; think for instance of (abundant as well as overdone) scientism.

What we see here are particular forms of mental activity. What about other forms, for instance the fixed-point attractor? As we have seen, our aspectional space does not allow for points as singularities. Both the semantics of the aspections and the structure of the space as a second-order differential prevent them. Yet, somebody could attempt to realize an orbit around a singularity that is as narrow as possible. Although such points of absolute stability are completely illusory, the idea of the absoluteness of ideas, idealism, represents just such an attempt. Yet, the claim of absoluteness brings mental activity to rest. It is therefore not by accident that it was the logician Frege who championed a rather strange kind of hyperplatonism.

At this point we can recognize the possibility of describing different forms of mental activity using our space. Mental activity draws specific trails into our space. Moreover, our suggestion is that people prefer particular figures for whatever reasons, e.g. due to their cultural embedding, their mental capabilities, their knowledge, or even their basic physical constraints. Our space allows us to compare, and perhaps even to construct or evolve, particular figures. Such figures could be conceived as the orthoregulative instance for the conditions to know. Epistemology thus loses its claim to universality. It seems obvious to call our space a “choreostemic” space, a term which refers to choreography. Choreography means to “draw a dance”, or “drawing by dancing”, derived from the Greek choreia (χορεία), “dance, (round) dance”.
Vera Bühlmann [19] described that particular quality as “referring to an unfixed point loosely moving within an occurring choreography, but without being orchestrated prior to and independently of such occurrence.” The notion of the choreosteme also refers to the chorus of the ancient theatre, with all its connotations, particularly the drama. Serving as an announcement of part V of D&R, Deleuze writes:

However, what carries out the third aspect of sufficient reason—namely, the element of potentiality in the Idea? No doubt the pre-quantitative and pre-qualitative dramatisation. It is this, in effect, which determines or unleashes, which differenciates the differenciation of the actual in its correspondence with the differentiation of the Idea. Where, however, does this power of dramatisation come from? (DR221)

It is right here that the choreostemic space links in. The choreostemic space does not abolish the dramatic in the transition from the conditionability of Ideas into concrete thoughts, but it allows us to trace and to draw, to explicate and to negotiate the dramatic. In other words, it opens the possibility for a completely new game: dealing with mental attitudes. Without the choreostemic space this game is not even visible, which itself has rather unfortunate consequences.

The choreostemic space is not an epistemic space either. Epistemology is concerned with the conditions that influence the possibility to know. Literally, episteme means “to stand near”, or “to stand over”. It draws upon a fixed perspective that is necessary to evaluate something. Yet, in the last 150 years or so, philosophy has definitely experienced the difficulties implied by epistemology as an endeavour that was expected finally to contribute to the stabilization of knowledge. I think the choreostemic space could be conceived as a tool that allows us to reframe the whole endeavour. In other words, the problematic field of the episteme, and the related research programme “epistemology”, follow an architecture (or intention) that has been set up far too narrowly. That reframing, though, has become accessible only through the “results” of, or the tools provided by, the work of Wittgenstein and Deleuze. Without the recognition of the role of language, and without a renewal of the notion of the virtual, including the invention of the concept of the differential, that reframing would not have been possible at all.

Before we discuss further the scope of the choreostemic space and the purposes it can serve, we have to correct the Cartesian view that slipped in through our metaphorical references. The Cartesian flavour not only keeps a certain arbitrariness alive, as the four conceptual aspects _A are given just by some subjective empirical observations. It also keeps us stuck completely within the analytical space, hence with a closed approach that would again need a mystical external instance for its beginning. This we have to correct now.

7. Reason and Sufficiency

Our choreostemic space is built as an aspectional space that is spanned by transcendental entities. As such, they reflect the implied conditionability of concrete entities like definitions, models or media. The _Concept comprises any potential concrete concept, the _Model comprises any actual model of whatsoever kind, expressed in whatsoever symbolic system, and the _Mediality contains the potential for any kind of media, whether more material or more immaterial in character.
The transcendental status of these aspects also means that we can never “access” them in their “pure” form. Yet, due to these properties, our space allows us to map any mental activity, not just that of the human brain. In a more general perspective, our space is the space where the _Comparison takes place.

The choreostemic space is of course itself a model. Given the transcendentality of the four conceptual aspects _A, we can grasp the self-referentiality. Yet, this results neither in an infinite regress nor in circularity. That would be the case only if the space were Cartesian and the topological structure flat (Euclidean) and global. First, we have to consider that the choreostemic space is not only a model, precisely due to its self-referentiality. Second, it is a tool, and as such it is not time-inert like a physical law. Its relevance unfolds only if it is used. This, however, invokes time and activity. Thus the choreostemic space could also be conceived as a means to intensify the virtual aspects of thought. Third, it is of course a concept, that is, an instance of the _Concept. As such, it should be constructed in a way that abolishes any possibility of a Cartesio-Euclidean regression.

All these aspects are covered by the topological structure of the choreostemic space: it is meant to be a second-order differential. A space made by the second-order differential does not contain items. It spawns procedures. In such a space it is impossible to stay at a fixed point. Whenever one tried to determine a point, one would be accelerated away. The whole space causes divergence of mental activities. Here we find the philosophical reason for the impossibility of catching a thought as a single entity.

We just mentioned that the choreostemic space does not contain items. Due to the second-order differential it is not made up of a set of coordinates, or, if we considered real scaled dimensions, of potential sets of coordinates. Quite the opposite: there is nothing determinable in it. Yet, in hindsight, we can reconstruct figures in a probabilistic manner. The subject of this probabilism is again not determinable coordinates, but rather clouds of probabilities, quite similar to the way things are described in quantum physics by the Schrödinger equation. Unlike the structureless and formless clouds of probability used in the description of electrons, the figures in our space can take various, more or less stable forms. This means that we can try to evolve certain choreostemic figures and even anticipate them, but only to a certain degree. The attractor of a chaotic system provides a good metaphor for that (see the sketch below): we clearly can see the traces in parameter space as drawn by the system, yet the system’s path as described by a sequence of coordinates remains unpredictable. Nevertheless, the attractor is probabilistically confined to a particular, yet cloudy “figure”, that is, an unsharp region in parameter space. Transitions are far from arbitrary. Hence, we propose to conceive of the choreostemic space as being made up of probabilistic situs (pl.). Transitions between situs are at the same time also transformations. The choreostemic space is embedded in its own mediality, without excluding roots in external media.
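The attractor metaphor above can be made tangible with a minimal numerical sketch. We use the Lorenz system with its textbook parameters; the choice of this particular system, and the crude Euler integration, are our own illustrative assumptions:

```python
# Minimal sketch of the attractor metaphor: the Lorenz system. The detailed
# path is unpredictable in the long run, yet every trajectory stays confined
# to a bounded, cloudy "figure" in state space.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One explicit Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (1.0, 1.0, 1.0)
points = []
for _ in range(10_000):
    state = lorenz_step(*state)
    points.append(state)

# The trajectory remains confined to an unsharp but bounded region...
print(min(p[0] for p in points), max(p[0] for p in points))
# ...even though nearby starting points diverge exponentially over time.
```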
Above we equipped the space with a hyperbolic topology in order to align with the transcendentality of the conceptual aspects. It is quite important to understand that the choreostemic space does not implement a single, i.e. global, hyperbolic relation. In contrast, each situs serves as a point of reference. Without this relativity, the choreostemic space would again be centred, and in consequence it would turn again to the analytic and totalising side. This relativity can be regarded as the completed and subjectivising Cartesian delocalization of the “origin”. It is clear that the distance measures of any two such relative hyperbolic spaces no longer coincide. There is neither a priori objectivity, nor could we expect a general mapping function. Approximate agreement about distance measures may be achievable only for reference systems that are rather close to each other.

The choreostemic space comprises any condition of any mental attitude or thought. We already mentioned it above: the corollary is that the choreostemic space is the space of _Comparison as a transcendental category. It comprises the conditions for the whole universe of Ideas; it is an entirety. Here, it is again the topological structure of the space that saves us from mental dictatorship. We have to perform a double instantiation in order to arrive at a concrete thought. It is important to understand that these instantiations are orthoregulated.

It is clear that the choreostemic space destroys the idea of a uniform rationality. Rationality cannot be tied to truth, justice or utility in an objective manner, even if we softened objectivity into a kind of relaxed intersubjectivity. Rationality depends completely on the preferred or practiced figures in the choreostemic space. Two persons, or more generally, two entities with some mental capacity, could completely agree on the facts, that is, on the percepts, the way of their construction, and the relations between them, and nevertheless assign them completely different virtues and values, simply because the two entities inhabit different choreostemic attractors. Rationality is global within a specific choreostemic figure, but local and relative with regard to that figure. The language game of rationality therefore does not refer to a particular attitude towards argumentation; quite in contrast, it includes and displays the will to establish, if not to enforce, uniformity. Rationality is the label for the will to power under the auspices of logic and reductionism. It serves as the display for certain, quite critical, moral values.

Thus, the notion of sufficient reason loses its frightening character as well. Like any other principle of practice, it gets transformed into a strictly local principle, retaining some significance only with regard to situational instrumentality. Since the choreostemic space is a generative space, locality comprises temporal locality as well. According to the choreostemic space, sufficient reasons cannot even be transported between subsequent situations. In terms of the choreostemic space, notions like rationality or sufficient reason are relative to a particular attractor. In different attractors their significance could be very different; they may bear very different meanings. Viewed from the opposite direction, we can also see that a more or less stable attractor in the choreostemic space has first to form, or to be formed, before there is even the possibility of sufficient reasons.
This runs strictly parallel to Wittgenstein’s conception of logic as a transcendental a priori that can become instantiated only within the process of an unfolding Lebensform. As a contribution to political reason, the choreostemic space makes it possible to recognize persons as inhabiting different attractors, following different mental styles. Later, we will return to this aspect.

In D&R, Deleuze explicated the concept of the “Image of Thought”, as part III of D&R is titled. There he first discusses what he calls the dogmatic image of thought, composed, according to him, of eight elements that together lead to the concept of the idea as a representation (DR167). Following that, he insists that the idea is bound to repetition and difference (as differenciation and differentiation), where repetition introduces the possibility of the new, as it is not the repetition of the same. Nevertheless, Deleuze did not develop this Image into a multiplicity, as might have been expected from a more practical perspective, i.e. the perspective of language games. These games are different from his notion, even though he emphasizes at several instances that language is a rich play. To me it seems that Deleuze did not (want to) get rid of ontology; hence he did not conceive of his great concept of the “differential” as a language game, and in turn missed the opportunity for self-referentiality, or even its self-referential application. We therefore certainly do not agree with his attempt to ground the idea of sufficient reason as a global principle. Since “sufficient reason” is a practice, I think it is not possible, or not sufficient, to conceive of it as a transcendental guideline.

8. Elective Kinships

It is pretty clear that the choreostemic space is applicable to many problematic fields concerning mental attitudes, and hence concerning cultural issues at large, reaching far beyond the specificity of individual domains. As we will see, the choreostemic space may serve as a treatment for several kinds of troublesome aberrances, in philosophy itself as well as in its various applications. Predominantly, the choreostemic space provides the evolutionary perspective towards the self-containing theoretical foundation of plurality and manifoldness.26 Comparing that with Hegel’s slogans of “the synthesis of the nation’s reason” (“Synthese des Volksgeistes”) or “The Whole is the Truth” (“Das Ganze ist das Wahre”) shows the difference regarding level and scope. Before we go into the details of the dynamics that unfolds in the choreostemic space, we would like to pick up on two areas: the philosophy of the episteme, and the relationship between anthropology and philosophy.

8.1. Philosophy of the Episteme

The choreostemic space is not about yet another variety of some epistemological argument. It is intended as a reframing of the concerns that have traditionally been addressed by epistemology. (Here, we would already like to warn against the misunderstanding that the choreostemic space exhausts itself as epistemology.) Hence, it should also be able to serve as the theoretical frame for the sociology of science or the philosophy of science. Think of the work of Bruno Latour [9], Karin Knorr Cetina [10] or Günther Ropohl [11] for the sociology of science, or the work of van Fraassen [12] or Giere [13] in the philosophy of science.
Sociology and philosophy, and quite likely any of the disciplines in the human sciences, should indeed establish references to the mental in some way, but rather not to the neurological level, nor (since we have to avoid anthropological references) to cognition as it is currently understood in psychology. Giere, for instance, brings the “cognitive approach”, and hence the issue of practical context, close to the understanding of science, criticizing the idealising projection of unspecified rationality:

Philosophers’ theories of science are generally theories of scientific rationality. The scientist of philosophical theory is an ideal type, the ideally rational scientist. The actions of real scientists, when they are considered at all, are measured and evaluated by how well they fulfill the ideal. The context of science, whether personal, social or more broadly cultural, is typically regarded as irrelevant to a proper philosophical understanding of science. (p.3)

The “cognitive approach” that Giere proposes as a means to understand science is, however, seriously threatened by the fact that there is no consensus about the mental. This clearly conflicts with the claim of trans-cultural objectivity of contemporary science. Concerning cognition, there are still many simplistic paradigms around, recently seriously renewed by the machine learning community. Aaron Ben Ze’ev [14] writes critically:

In the schema paradigm [of the mind, m.], which I advocate, the mind is not an internal container but a dynamic system of capacities and states. Mental properties are states of a whole system, not internal entities within a particular system. […] Novel information is not stored in a separate warehouse, but is ingrained in the constitution of the cognitive system in the form of certain cognitive structures (or schemas). […] The attraction of the mechanistic paradigm is its simplicity; this, however, is an inadequate paradigm, because it fails to explain various relevant phenomena. Although the complex schema paradigm does not offer clear-cut solutions, it offers more adequate explanations.

How problematic even such critiques are can be traced as soon as we remember Wittgenstein’s remark on “mental states” (Brown Book, p.143):

There is a kind of general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.

In the more general field of epistemology there is still no sign of any agreement about the concept of knowledge. From our position, this is hardly surprising. First, concepts cannot be defined at all. All we can find are local instances of the transcendental entity. Second, knowledge, and even its choreostemic structure, depends on the embedding culture, while at the same time it is forming that culture. The figures in the choreostemic space are attractors: they do not prescribe the next transformation, but they constrain its possibility. How, then, could one ever “define” knowledge in an explicit, positively representationalist manner? For instance, knowledge cannot be reduced to confirmed hypotheses qua validated models. It is impossible in principle to say “knowledge is…”, since this inevitably implies the demand for an objective justification. At most, we can take it as a language game. (Thus the choreosteme, that is, the potential for building figures in the choreostemic space, should not be confused with the episteme! We will return to this issue later.)
Yet, just pointing to the category of the mental as a language game does not feel satisfying at all. Of course, Wittgenstein’s work sheds bright light on many aspects of mentality. Nevertheless, we cannot use Wittgenstein’s work as a structure; it is itself to be conceived as the result of a certain structuredness. On the other hand, it is equally disappointing to rely on the scientific approach to the mental. In some way, we need a balanced view, which additionally should provide the possibility for a differential experimentation with mechanisms of the mental. Just that is offered by the choreostemic space. We may relate disciplinary reductionist models to concepts as they live in language games, without any loss and without getting into trouble either.

Let us now see what is possible by means of the choreostemic space and the anti-ontological T-Bar-Theory for the terms believing, referring, explicating, understanding and knowing. It might be relevant to keep in mind that by “mental activities” we do not refer to any physical or biochemical process. We distinguish the mental from the low-level affairs in the brain. Beliefs, or believing, are thus considered to be language games. From that perspective, our choreostemic space just serves as a tool to externalize language in order to step outside of it, or likewise, to become able to render important aspects of playing the language game visible.

The category of beliefs, or likewise the activity of believing27, we already met above. We characterised it as a mental activity that leaves the model behind. We sharply refute the quite abundant conceptualisation of beliefs as a kind of uncertainty in models. Since there is no certainty at all, not even with regard to transcendental issues, that would make little sense. Actually, the language game of believing shows its richness even in a short investigation like this one. Before we go into details, let us see how others conceive of it. PMS Hacker [27] gave the following summary:

Over the last two and a half centuries three main strands of opinion can be discerned in philosophers’ investigations of believing. One is the view that believing that p is a special kind of feeling associated with the idea that p or the proposition that p. The second view is that to believe that p is to be in a certain kind of mental state. The third is that to believe that p is to have a certain sort of disposition.

Right at the beginning of his investigation, Hacker marks the technical, reductionist perspective on belief as a misconception. This technical reductionism, which took form as the so-called AGM-theory in the paper by Alchourrón, Gärdenfors and Makinson [28], we will discuss below. Hacker writes about it:

Before commencing analysis, one misconception should be mentioned and put aside. It is commonly suggested that to believe that p is a propositional attitude. That is patently misconceived, if it means that believing is an attitude towards a proposition. […] I shall argue that to believe that p is neither a feeling, nor a mental state, nor yet a disposition to do or feel anything.

Obviously, believing has several aspects. First, it is certainly a kind of mental activity. It seems that I need not tell anybody that I believe in order to be able to believe. Second, it is a language game, and a rich one indeed. It seems almost omnipresent. As a language game, it links “I believe that” with “I believe A” and “I believe in A”.
We should not overlook, however, that these utterances are spoken towards someone else (even in inner speech); hence the whole wealth of processes and relations of interpersonal affairs has to be regarded: all those mutual ascriptions of roles, assertions, maintained and demonstrated expectations, displays of self-perception, attempts to induce a certain co-perception, and so on. We frequently cited Robert Brandom, who analysed this in great detail in his “Making it Explicit”. Yet, can we really say that believing is just a mental activity? For one thing, we did not say above that believing is something like a “pure” mental activity. We clearly would reject such a claim. First, we clearly cannot put the mental as such into a transcendental status, as this would lead straight to a system like Hegel’s philosophy, with all its difficulties, untenable claims and disastrous consequences. Second, it is impossible to explicate “purity”, as this would deny the fact that models are impossible without concepts.

So, is it possible for a non-conscious being or entity to believe? Not quite, I would like to propose. Such an entity will of course be able to build models, even quite advanced ones, though probably not about reflective subjects such as concepts or ideas. It could experience that it cannot get rid of uncertainty and its closely related companion, risk. Thus we can say that these models are not propositions “about” the world; they comprise uncertainty and allow the entity to deal with uncertainty through actions in the world. Yet, the ability to deal with uncertainty is certainly not the same as believing. We would not need the language game at all. Saying “I believe that A” does not mean having a certain model with a particular predictive power available. As models are explications, expressing a belief, or experiencing the compound mental category “believing”, is just the demonstration that any explication is impossible for the person. Note that we conceive of “belief” as completely free of values and also without any reference to mysticism. Indeed, the choreostemic space allows us to distinguish different aspects of the “compound experience” that we call “belief”, which otherwise are not even visible as separate aspects of it.

As a language game we thus may specify it as the indication that the speaker assigns, or the listener is expected to assign, a considerable portion of the subject matter to that part of the choreostemic figure that points away from the _Model. It is immediately clear from the choreostemic space that mental activity without belief is not possible. There is always a significant “rest” that cannot be covered by any kind of explication. This is true for engineering, and of course for any kind of social interaction as soon as mutual expectations appear on the stage. By means of the choreostemic space we can also understand the significance of trust in any interaction with the external world. In communicative situations, this quickly may lead to a game of mutual deontic ascriptions, as Robert Brandom [15] has argued in his “Making it Explicit”.

Interestingly enough, belief (in its choreostemically founded version) is implied by any transition away from the _Model, for instance also in the case of the transition path that ultimately heads towards the _Concept. Even more surprising, at first sight, and particularly relevant is the “inflection dynamics” in the choreostemic space. The more one tries to explicate something, the larger the necessary imports (e.g.
through orthoregulations) from the other _a, and hence the larger the propensity for an inflecting flip.28 As an example, take the historical development of theories in particle physics. There, people started with rather simple experimental observations, which then were assimilated by formal mathematical models. Those in turn led to new experiments, and so forth, until physics reached a level of sophistication where “observations” are based on several, if not many, layers of derived concepts. Along the way, structural constants and heuristic side conditions are implied. Finally, the system of the physical model turns into an architectonics, a branched compound of theory-models that sounds as trivial as it is conceptual. In the case of physics, it is the so-called grand unified theory. There are several important things here. First, due to the large amount of heuristic settings and orthoregulations, such concepts can no longer be proved or disproved, least of all by empirical observations. Second, on the achieved level of abstraction, the whole subject could be formulated in a completely different manner. Note that such a dynamic between experiment, model, theory29 and concept never has been described in a convincing manner before.30

Now that we have a differentiated picture of belief at our disposal, we can briefly visit the field of so-called belief revision. Belief revision has been widely adopted in artificial intelligence and machine learning as the theory for updating a data base. Quite unfortunately, the whole theory is, well, simply crap, if we were to apply it according to its intention. I think that we can draw from this mismatch some significance of the choreostemic space for a more appropriate treatment of beliefs in information technology. The theory of belief revision was put forward by a branch of analytical philosophy in a paper by Alchourrón, Gärdenfors and Makinson (1985) [29], often abbreviated as “AGM-theory”. Hansson [30] writes:

A striking feature of the framework employed there [monnoo: AGM] is its simplicity. In the AGM framework, belief states are represented by deductively closed sets of sentences, called belief sets. Operations of change take the form of either adding or removing a specified sentence.

Sets of beliefs are held by an agent, who establishes or maintains purely logical relations between the items of those beliefs. Hansson correctly observes that:

The selection mechanism used for contraction and revision encodes information about the belief state not represented by the belief set.

Obviously, such “belief sets” have nothing to do with belief as we know it from the language game, besides the fact that they are a misdone caricature of it. As with Pearl [23], the interesting stuff is left out: how to arrive at those logical sentences at all, notably by a non-symbolic path of derivation? (There are no symbols out there in the world.) By means of the choreostemic space we easily derive the answer: by an orthoregulated instantiation of a particular choreostemic performance in an unbounded (open) aspectional space that spans between transcendental entities. Since the AGM framework starts with, or presupposes, logic, it simply got stuck in the symbolistic fallacy, or illusion. Accordingly, Pollock & Gillies [30] demonstrate that “postulational approaches” such as the AGM-theory cannot work within a fully developed “standard” epistemology. The two are simply incompatible with each other.
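To see concretely what the criticized AGM operations amount to, here is a deliberately naive toy sketch. It is our own construction, not the formal AGM apparatus: a finite belief base with an explicit rule closure stands in for the deductively closed belief sets.

```python
# Toy sketch of AGM-style belief change. Real AGM belief sets are
# deductively closed (hence infinite); here a closure over explicit
# "p implies q" rules stands in for the consequence operator Cn.

def closure(base, rules):
    """Close a finite belief base under the given implication rules."""
    beliefs = set(base)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in beliefs and q not in beliefs:
                beliefs.add(q)
                changed = True
    return beliefs

RULES = [("bird", "flies")]

K = closure({"bird"}, RULES)                 # K = {"bird", "flies"}
expansion = closure(K | {"penguin"}, RULES)  # AGM expansion: K + "penguin"
contraction = K - {"flies"}                  # naive contraction of "flies"

# Hansson's point above, in miniature: contraction alone leaves "bird",
# from which "flies" is immediately re-derivable. Deciding what else to
# give up requires information not contained in the belief set itself.
print(closure(contraction, RULES))           # {"bird", "flies"} again
```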
Closely related to believing is explicating, the latter being just the inverse of the former, pointing in the “opposite direction”. Explicating is almost identical to describing a model. The language game of “explication” means to transform, translate and project choreostemic figures into lists of rules that could be followed, or in other words, into the sayable. Of course, this transformation and projection is neither analytic nor neutral. We must be aware of the fact that even a model cannot be explicated completely. Further, this rule-following itself implies the necessity of beliefs and trust, and it requires a common understanding about the usage or the influence of orthoregulations. In other words, without an embedding into a choreostemic figure, we cannot accomplish an explication.

Understanding, Explaining, Describing

Outside of the perspective of the language game, “understanding” cannot be understood. Understanding emerges as a result of relating the items of a population of interpretive acts. This population, and the relations imposed on it, are closely akin to Heidegger’s scaffold (“Gestell”). Mostly, understanding something is just extending an existent scaffold. About these relations we can no longer speak clearly or explicitly, since they are constitutive parts of the understanding. Like all language games, this one too unfolds in social situations, which need not be syntemporal. Understanding is a confirming report about beliefs in, and expectations of, certain capabilities of one’s own. Saying “I understand” may convey different meanings. More precisely, understanding may come in different shades placed between two configurations. Either it signals that one believes oneself able to extend just one’s own scaffold, one’s own future “Gestelltheit”. Alternatively, it is used to indicate the belief that the extension of the scaffold is shared between individuals, in such a way that one is able to reproduce the same effect that anyone else understanding the same thing could have produced. This effect could be merely instrumental or, more significantly, it could refer to the teaching of further pupils. In this case, two people understand something if they can teach another person to the same ends.

Besides the performative and social aspects of understanding there are of course the mental aspects of the concept of “understanding” something. These can be translated into choreostemic terms. Understanding is less a particular “figure” in the choreostemic space than a deliberate visiting of the outer regions of the figure and the intentional exploration of those outposts. We understand something only in case we are aware of the conditions of that something and of our personal involvements. These include cognitive aspects, but also the consequences of the performative parts of acts that contribute to an intensifying of the aspect of virtuality. A scientist who builds a strong model without considering his own and its conditionability does not understand anything. He would just be practicing a serious sort of dogma (see Quine on the dogmas of empiricism!). Such a scientist’s modeling could be replaced by that of a machine. A similar account could be given of the application of a grammar, irrespective of the abstractness of that grammar. Referring to a grammar without considering its conditionability could be performed by a mindless machine as well. It would indeed remain a machine: mindless, and forever determined.
Such is most, if not all, of the computer software dealing with language today. We would again like to emphasize that understanding is not exhausted by the ability to write down a model. Understanding means relating the model to concepts, that is, tracing a possible path that would point towards the concept. A deep understanding refers to the ability to extend a figure towards the other transcendental aspects in a conscious manner. Hence, within idealism and (any sort of) representationalism, understanding is actually excluded. They mistake the transcendental for the empirical and vice versa, ending in a strict determinism and dogmatism.

Explaining, in turn, indicates the intention to make somebody else understand a certain subject. The infamous existential “Why?” does not make any sense. It is not just questionable why this language game should be played at all; the why of absolute existence cannot be answered at all. Actually, it seems to be quite different from that. As a matter of fact, we do indeed play this game in a well comprehensible way and in many social situations. Conceiving the “explanation” of nature as accounting for its existence (as Epperson does, see [31], p.357) presupposes that everything could be turned into the sayable. It would result in the conflation of logic and the factual world, something Epperson indeed proposes. Some pages later in his proposal about quantum physics he seems to loosen that strict tie when, referring to Whitehead, he links “understanding” to coherence and empirical adequacy ([31], p.361):

I offer this argument in the same speculative philosophical spirit in which Whitehead argued for the fitness of his metaphysical scheme to the task of understanding (though not “explaining”) nature—not by the “provability” of his first principles via deduction or demonstration, but by their evaluation against the metrics of coherence and empirical adequacy.

Yet, this presents us with an almost perfect phenomenological stance, separating objects from one another and from subjects. Neither coherence nor empirical adequacy can be separated from concepts, models and the embedding Lebenswelt. It thus expresses the belief in “absolute” understanding and final reason. Such ideas are at least highly problematic, even and especially if we take into account the role Whitehead gives to “value” as a cosmological a priori. It is quite clear that this attitude to understanding is sharply different from anything related to semiotics, the primacy of interpretation, the role of language or a relational philosophy, in short, from anything that even remotely resembles what we proposed about the understanding of understanding a few lines above.

The intention to make somebody else understand a certain subject necessarily implies a theory, where theory is understood (as we always do) as a milieu for deriving or inventing models. The “explaining game” comprises the practice of providing a general perspective to the recipient such that she or he could become able to invent such a model, precisely because a “direct” implant of an idea into someone else is quite impossible. This milieu involves orthoregulation and a grammar (in the philosophical sense). The theory, and the grammar associated or embedded with it, do nothing other than provide support for finding a possibility for the invention or extension of a model. It is a matter of persistent exchange of models from a properly grown population of models that allows a common understanding about something to develop.
In the end we may then say: "yes, I can follow you!"

Describing is often not (properly) distinguished from explaining. Yet, in our context of choreostemically embedded language games, it is neither mysterious nor difficult to do so. We may conceive of describing just as explicating something into the sayable; the element of cross-individual alignment is not part of it, or at least present in a much less explicit way. Hence, usually the respective declaration will not be made. The element of social embedding is much less present. Describing pretends, more or less, that all three aspects accompanying the model aspect could be neglected, particularly the aspects of mediality and virtuality. The mathematical proof can be taken as an extreme example of this. Yet even there the pretension fails, since at least a working system of symbols is needed, which in turn is rooted in a dynamics unfolding as a choreostemic figure, the mental aspect of Forms of Life. Basically, this impossibility of fixing a "position" in the choreostemic space is responsible for the so-called foundational crisis in mathematics. This crisis prevails even today in philosophy, where many people, naively enough, still search for absolute justification, or truth, or at least regard such as a reasonable concept.

All this should not be understood as an attempt to deny description, or describing, as a useful category. Yet we should be aware that the difference from explaining is just one of (choreostemic) form. More explicitly, said difference is an affair of culturally negotiated portions of the transcendental aspects that make up mental life. I hope this sheds some light on Wittgenstein's claim that philosophy should just describe, but not explain anything. The possibly perceived mysteriousness may vanish as well if we remember his characterisation of grammar. Both understanding and explaining are quite complicated, socially mediated processes; hence they unfold upon layers of milieus of mediality. Both not only relate to models and concepts that need to exist in advance, and thus to a particular dynamics between them, they also require a working system of symbols. Models and concepts relate to each other only as instances of _Models and _Concepts, that is, in a space as provided by the choreostemic space. Talking about understanding as a practice is not possible without it.

Referring to something means to point to the expectation that the referred entity could point to the issue at hand. Referring is not "pointing to" and hence does not consist of a single move. It is "getting pointed to". Said expectation is based on at least one model. Hence, if we refer to something, we put our issue as well as ourselves into the context of a chain of signifiers. If we refer to somebody, or to a named entity, then this chain of interpretive relations transforms in one of two ways. Either the named entity is used, that is, put into a functional context, more precisely, assigned a sayable function. The functionalized entity does not (need to) interpret any more; all activity gets centralized, which could be used as the starting point for totalizing control. This applies to any entity, whether merely material, living, or social. The second way in which referencing is affected by names concerns the reference to another person, or a group of persons. If it is not a functional relationship, e.g. taking the other as a "social tool", it is less about the expected chaining of signifiers by the other person.
Persons cannot be interpreted the way we interpret things or build signs from signals. Referring to a person means to accept the social game that comprises (i) mutual deontic assignments that develop into "roles", including deontic credits and their balancing (as first explicated by Brandom [15]), (ii) the acceptance of the limit of the sayable, which results in a use of language that is more or less non-functional, always metaphorical and sometimes even poetic, as well as (iii) the declared willingness to persist in repeated exchanges. The fact that we interpret the utterances of our partner within the orthoregulative milieu of a theory of mind (which builds up through these interpretations) means that we mediatize our partner at least partially. The limit of the sayable is a direct consequence of the choreostemic constitution of performing thinking. The social is based on communication, which means "to make something common"; hence we can regard "communication" as the driving, extending and public part of using sign systems. As a proposed language game, "functional communication" is nonsense, much like the utterance "soft stone". By means of the choreostemic space we can also see that any referencing is equal to a more or less extensive figure, as models, concepts, performance and mediality are involved.

At first sight, we might suspect that before any instantiation qua choreostemic performance we cannot know anything positively for sure in a global manner, i.e. objectively, as is often meant to be expressed by the substantive "knowledge". Due to that performance we have to interpret before we can know positively and objectively. The result is that we can never know anything for sure in a global manner. This holds even for transcendental items, that is, for what Kant dubbed "pure reason". Nevertheless, the language game "knowledge" has a well-defined significance. "Knowledge" is a reasonable category only with respect to performing, interpreting (performance in thought) and acting (organized performance). It is bound to a structured population of interpretive situations, to Peircean signs. We thus find a gradation of privacy vs. publicness with respect to knowledge. We just have to keep in mind that neither of these qualities can be thought of as "pure". Pure privacy is not possible, because there is nothing like a private language (meaning qua usage and shared reference). Pure publicness is not possible because there is the necessity of a bodily rooted interpreting mechanism (associative structure). Things like "public space" as a purely exterior or externalized thing do not exist. The relevant issue for our topic of a machine-based episteme is that functionalism always ends in a denial of the private language argument.

We can now easily see why knowledge cannot be conceived as a positively definable entity that could be stored or transferred as such. First, it is of course a language game. Second, and more important, "knowing {of, about, that}" always relates to instances of transcendental entities, and necessarily so. Third, even if we could agree on some specific way of instantiating the transcendental entities, it always invokes a particular figure unfolding in an aspectional space. This figure can't be transferred, since that would mean that we could speak about it from outside of itself. Yet that is not possible, since it is in turn impossible to just pretend to follow a rule. Given this impossibility, we should dwell for a moment on the apparent gap it opens towards teaching.
How can we teach somebody something if knowledge can't be transferred? The answer is furnished by the equipment that is shared among the members of a community of speakers or co-inhabitants of the choreostemic space. We need this equipment for matching the orthoregulation of our rule-following. The parts, tools and devices of this equipment are made from palpable traditions, cultural rhythms, institutions, individual and legal preferences regarding the weighting of individuals versus the various societal clusters, the large story of the respective culture and the "templates" provided by it, the consciously accessible time horizon, both towards the past and the future31, and so on. Common sense wrongly labels the resulting "setup" a "body of values". More appropriately, we could call it grammatical dynamics. Teaching, then, is in some way more about the reconstruction of the equipment than about the agreement on facts, albeit the arrangement of the facts may tell us a lot about the grammar.

Saying "I know" means that one wants to indicate that she or he is able to perform choreostemically with regard to the subject at hand. In other words, it is a label for a pointer (say, a reference) to a particular image of thought and its use. This includes the capability of teaching and explaining, which are probably the only ways to check whether somebody really knows. However, we cannot claim that we are aligned to a particular choreostemic dynamics. We can only believe that our choreostemic moves are part of a supposed attractor in the choreostemic space. From this it also follows that knowledge is not just about facts, even if we were to conceive of facts as compounds of fixed relations and fixed things.

The traditional concerns of epistemology, as the discipline that asks about the conditions of knowing and knowledge, must be regarded as a misplaced problem. Usually, epistemology does not refer to virtuality or mediality. Moreover, in epistemology knowledge is often sharply separated from belief, yet for the wrong reasons. The formula of "knowledge as justified belief" puts them both onto the same stage. It would then have to be clarified what "justified" means, which in turn is not possible. Explicating "justifying" would require reference to concepts and models, or rather the confinement to a particular one: logic. Yet knowledge and belief are completely different with regard to their role in choreostemic dynamics. While belief is an indispensable element of any choreostemic figure, knowledge is the capability to behave choreostemically.

8.2. Anthropological Mirrors

Philosophy suffers even more from a surprising strangeness. As Marc Rölli recently mentioned [34] in his large work about the relations between anthropology and philosophy (KAV):

For more than 200 years philosophy has been anthropologically determined. Yet philosophy did not investigate the relevance of this fact to any significant extent. (KAV15)32

Rölli agrees with Nietzsche regarding his critique of idealism: "Nietzsche's critique of idealism, which is available in many nuances, always targeting the philosophical self-misunderstanding of pure reason or pure concepts, is also directed against a certain conception of nature." (KAV439)33 …where this rejected conception of nature is purposefulness. In nature there is no forward-directed purpose, no plan. Such ideas are due either to religious romanticism or to a serious misunderstanding of the Darwinian theory of natural evolution.
In biological nature there is only a blind tendency towards the preference of an intensified capability for generalization34. Since Kant, and including him, and in some ways already since Descartes, philosophy has been influenced by scientific, technological or anthropological conceptions about nature in general, or about the nature of the human mind. This is problematic for (at least) three reasons. First, it constitutes a misunderstanding of the role of philosophy to rely on scientific insights. Of course, this perspective is becoming visible (again) only today, notably after the Linguistic Turn, as far as non-analytical philosophy is concerned. Secondly, it is clear that the said influence implies, if it remains unreflected, a normative tie to empiric observations. This clearly represents a methodological shortfall. Thirdly, even if one accepted a certain link between anthropology and philosophy, the foundations taken from a "philosophy of nature"35 are so simplistic that they can hardly be regarded as viable. This almost primitive image of a purposeful nature finally flowed into the functionalism of our day, whether in philosophy (Habermas) or in so-called neuro-philosophy, by which many feel inclined to establish a variety of determinism that is even proto-Hegelian.

In the same passage that invokes Nietzsche's critique, Rölli cites Friedrich Albert Lange [39]:

The topic that we actually refer to can be denoted explicitly. It is, so to speak, the apple in the logical lapse of German philosophy subsequent to Kant: the relation between subject and object within knowledge. (KAV443)36

Lange deliberately attests that Kant, in contrast to the philosophers of German Idealism, was clear about that relationship. For Kant, subject and object constitute themselves only as an amalgam; the pure version of either has been claimed only by Hegel, Schelling and their epigones and inheritors. The intention behind introducing pureness, according to Lange, is to support absolute reason or absolute understanding, in other words, eternally justified reason and the undeniability of certain concepts. Note that German Idealism was born before the foundational crisis in mathematics, which started with Russell's remark on Frege's "Begriffsschrift" and its universal quantifier, which found its continuation in the Hilbert programme, and which has finally been inscribed into the roots of mathematics by Goedel. Philosophies of "pureness" are not items of the past, though. Think about materialism, or about Agamben's "aesthetics of pure means", which Benjamin Morgan [41] correctly identified as the metaphysical scaffold of Agamben's recent work.

Marc Rölli dedicates all of his 512 pages to the endeavor of destroying the extra-philosophical foundations of idealism. As the proposed alternative we find pragmatism, that is, a conceptual foundation of philosophy based on language and form of life (Lebenswelt in the Wittgensteinian sense). He concludes his work accordingly:

After all it may have become clearer that this is not about a simple, naive pragmatism, but rather about a pragmatism of difference37 that has been constructed with great subtlety. (KAV512)38

Rölli's main target is German Idealism. Yet Hegelian philosophy is undeniably abundant, and not only on the European continent, where it is carried by the Frankfurt School from Adorno to Habermas and even K.-O. Apel, followed by the ill-fated ideas of Luhmann, which are infected by Hegel as well.
Significant traces of it can also be found in German society, in contemporary legal positivism and in the oligarchy of political parties. During the last 20 years or so, Hegelian positions have also spread considerably in Anglo-American philosophy and political theory. Think of Hardt and Negri, or even the recent works of Brian Massumi. Hegelian philosophy, however, can't be taken in portions. It is totalitarian all through, because its main postulates, such as "absolute reason", are totalizing by themselves. Hegelian philosophy is a relic, and a quite dangerous one, regardless of whether you interpret it in a leftist (Lenin) or in a rightist (Carl Schmitt) manner. With its built-in claim to absoluteness comes the explicit denial of context-specificity, of the necessary relativity of interpretation, of the openness of future evolution, and of the freedom inscribed deeply even into the basic operation of comparison; all of these denials turn into transcendental aprioris. The same holds for the claim that things, facts, or even norms can be justified absolutely. No further comment should be necessary about that.

The choreostemic space itself cannot result in a totalising or even totalitarian attitude. We met this point already when we discussed the topological structure of the space and its a-locational "substance" (Reason and Sufficiency). As Deleuze emphasized, there is a significant difference between entirety and completeness, which just mirrors the difference between the virtual and the actual. We would like to add that the choreostemic space also disproves the possibility of universality for any kind of conception. In some way, yet implicitly, the choreostemic space defends humanity against materiality and any related attitude. Even if we were completely determined on the material level, which we surely are not39, the choreostemic space proves the indeterminateness and openness of our mental life.

You may already have got the feeling that we are going to slip into political theory. Indeed, the choreostemic space not only forms a space of indeterminateness and applicable pre-specificity, it also provides a kind of "Swiss neutrality". Its capability to allow for a comparison of collective mental setups, without resorting to physicalist concepts like swarms or mysticist concepts like "collective intelligence", provides fruitful ground for any construction of transitions between choreostemic attractors. Despite the fact that the choreostemic space concerns any kind of mentality, whether seen as hosted more by identifiable individuals or by collectives, the concept should not be taken as an actual philosophy of mind ("Philosophie des Geistes"). It transcends it, as it does any particular philosophical stance. It would be equally wrong to confine it to an anthropology or an anthropological architecture of philosophy, as is the case not only in Hegel (Rölli, KAV137). In some way, it presents a generative zone for a-human philosophies, without falling prey to the necessity of defining what human or a-human should mean. For sure, we do not refer here to transhumanism as it is known today, which just follows the traditional anthropological imperative of growth ("Steigerungslogik"), as Rölli correctly remarks (KAV459). A-human simply means that, as a conception, it is neither dependent on nor confined to the human Lebenswelt.
(We would again like to stress that it neither represents a positively sayable universalism nor even a kind of universal procedural principle, and also that this "a-" should not be understood as "anti" or "opposed", but simply as "being free of".) It is this position that is mandatory for drawing comparisons40 and, subsequently, conclusions (in the form of introduced irreversibilities) about entities that belong to strikingly different Lebenswelten (forms of life). Any particular philosophical position would immediately be guilty of applying human scales to non-human entities. That was already a central cornerstone of Nietzsche's critique, not only of the German philosophy of the 19th century, but also of the natural sciences.

8.3. Simplicissimi

Rölli criticizes the uncritical adoption by philosophy in the 19th century of items taken from the scientific world view. Today, philosophy is still not secured against simplistic conceptions uncritically assimilated from certain scientific styles, despite the fact that nowadays we could know about the (non-analytic) Linguistic Turn, or about the dogmatics in empiricism. What I mean here comprises two conceptual ideas: the reduction of living or social systems to states, and the notion of the exception, or that of normality, respectively.

There are myriads of references in the philosophy of mind invoking so-called mental states. Yet the state as a concept can be found not only in the philosophy of mind but also in political theory, namely in Giorgio Agamben's recent work, which builds heavily on the notion of the "state of exception". The concept of a mental state is utter nonsense, though, and mainly so for three very different reasons. The first one can be derived from the theory of complex systems, the second one from language philosophy, and the third one from the choreostemic space.

In complex systems, the notion of a state is empty. What we can observe, subsequent to the application of some empiric modeling, is that complex systems exhibit meta-stability. It looks as if they were stable and trivial. Yet what we could have learned, mainly from the biological sciences, but also from their formal consideration as complex systems, is that they aren't trivial. There is no simple rule that could describe the flow of things in a particular period of time. The reason is precisely that they are creative. They build patterns, hence they build a further "phenomenal" level, where the various levels of integration can't be reduced to one another. They exhibit points of bifurcation, which can be determined only in hindsight. Hence, from the empirical perspective we can only estimate the probability of stability. This, however, is clearly too weak to support the claim of "states" (see the toy sketch below).

Actually, from the perspective of language-oriented philosophy, the notion of a state is empty for any dynamical system that is subject to open evolution (but probably even for trivial dynamic systems). A real system does not build "states". There are only flows and memories. "State" is a concept, in particular an idealistic, or at least an idealizing, concept that is present only in the interpreting entity. The fact that one first has to apply a model before it is possible to assign states is deliberately suppressed whenever it is invoked by an argument that relates to philosophy or to any (other) kind of normativity. Therefore, the concept of "state" can't be applied analytically, or as a condition in a linearly arranged argument.
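As an aside for readers who like to see such claims in silico, here is a minimal sketch, not part of the argument proper, using the logistic map as a toy "complex system". The parameter value and the naive period detector are illustrative assumptions of ours; the point is only that an orbit can look like a stable "state" for long stretches, while the bursts ending those stretches cannot be scheduled in advance:

# A toy illustration of meta-stability, assuming the logistic map
# x_{n+1} = r * x_n * (1 - x_n) as a stand-in for a complex system.
# Just below the tangent bifurcation at r = 1 + sqrt(8) ~ 3.8284 the
# orbit mimics a period-3 cycle for long "laminar" stretches, then
# bursts chaotically at times determinable only in hindsight.

def logistic_trajectory(r, x0=0.2, n=20000):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def looks_period_3(xs, i, tol=1e-3):
    """Naive 'state' detector: does the orbit currently repeat with period 3?"""
    return abs(xs[i] - xs[i - 3]) < tol

r = 3.8282  # intermittent regime, slightly below the period-3 window
xs = logistic_trajectory(r)
laminar = sum(looks_period_3(xs, i) for i in range(3, len(xs)))
print(f"r = {r}: the orbit 'looks' period-3 on {laminar} of {len(xs) - 3} steps,")
print("yet the chaotic bursts interrupting it cannot be predicted beforehand.")

Whether the detector reports a "state" at a given step depends entirely on the model the observer brings along (here the period-3 template and the tolerance), which is precisely the second argument above.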
In saying this, we do not claim that the concept of state is meaningless at large. In natural science, especially throughout the process of hypothesis building, the notion of state can be helpful (sometimes, at least). Yet if one used it in philosophy in a recurrent manner, one would quickly arrive at the choreostemic space (or something very similar), where states are neither necessary nor even possible. Despite the fact that a "state" is only assigned, i.e. exists only as a concept, philosophers of mind41 and philosophers of political theory alike (such as Agamben [37], among other materialists) use it as a phenomenal reference. It is indeed somewhat astonishing to observe this relapse into naive realism within the community of otherwise trained philosophers. One of the reasons for this may well be the missing training in mathematics.42

The third argument against the reasonability of the notion of "state" in philosophy can be derived from the choreostemic space. A cultural body comprises individual mentality as well as a collective mentality based on externalized symbolic systems like language, to make a long story short. Both together provide the possibility for meaning. It is absolutely impossible to assign a "state" to a cultural body without losing the subject of culture itself. It would be much like a grammatical mistake. That "subject" is nothing else than a figurable trace in the choreostemic space. If one made such an assignment nevertheless, any finding would be relevant only within the reduced view. Hence, it would be completely irrelevant, as it could not support the self-imposed pragmatics. Continuing to argue about such a finding then establishes a petitio principii: one would find only what one originally assumed. The whole argument would be empty and irrelevant.

Similar arguments can be put forward regarding the notion of the exceptional, if it is applied in contexts that are governed by concepts and their interpretation, as opposed to trivial causal relationships. Yet Giorgio Agamben has indeed started to build a political theory around the notion of exception [37], which, at first sight strangely enough, has already triggered an aesthetics of emergency. Elena Bellina [38] cites Agamben:

The state of exception "is neither external nor internal to the juridical order, and the problem of defining it concerns a threshold, or a zone of indifference, where inside and outside do not exclude each other but rather blur with each other." In this sense, the state of exception is both a structured or rule-governed and an anomic phenomenon: "The state of exception separates the norm from its application in order to make its application possible. It introduces a zone of anomie into the law in order to make the effective regulation of the real possible."

Nothing but disastrous consequences result if the notion of the exception is applied to areas where normativity is relevant, e.g. in political theory. Throughout history there are many, many terrible examples of that. It is even problematic in engineering. We may call it fully legitimized "negativity engineering", as it establishes, completely unnecessarily, the opposition of the normal and the deviant as an apriori. The notion of the exception presumes total control as an apriori. As such, it is opposed to the notion of openness, and hence it also denies the primacy of interpretation. Machines that degenerate and that would produce disasters upon any malfunction cannot be considered smartly built (the sketch below puts the contrast into code).
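To make the engineering point concrete, here is a minimal sketch of the two attitudes, under assumptions of our own making: the class names, the threshold of 100.0 and the deviance factor of 4.0 are purely illustrative. The first handler declares an apriori normality and fails hard outside of it; the second defines deviance only relative to a running model of the expectable and degrades gracefully:

# Two toy input handlers, contrasting "exception handling" against
# deviance measured relative to a learned expectation. Illustrative only.

class AprioriHandler:
    """Negativity engineering: a fixed norm; everything else is 'exceptional'."""

    LIMIT = 100.0  # arbitrary apriori threshold

    def process(self, value: float) -> float:
        if abs(value) > self.LIMIT:
            raise ValueError("abnormal input")  # hard failure on deviation
        return value

class ExpectationMonitor:
    """The 'norm' is a running model of the expectable, updated by the flow."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha    # learning rate of the expectation
        self.mean = 0.0       # current expectation
        self.spread = 1.0     # current expected deviation (kept nonzero)

    def process(self, value: float) -> float:
        deviance = abs(value - self.mean) / self.spread
        # The expectation itself drifts: "the usual" is not set apriori.
        self.mean += self.alpha * (value - self.mean)
        self.spread += self.alpha * (abs(value - self.mean) - self.spread)
        if deviance > 4.0:
            return self.mean  # degrade gracefully instead of raising
        return value

The second class still contains models and a threshold, of course; the difference is that normality is not posited in advance but continuously renegotiated against the stream, which is the weaker and therefore more honest claim.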
In a setup that embraces indeterminateness there is not even the possibility of disastrous failure. Instead, deviances are defined only with respect to the expectable, not against a normality that is set apriori and is hence obscure. If deviance is taken as the usual (not the normal, though!), fault-tolerance and even self-healing can be built in as core properties, not as "exception handling". Exception is the negative category to the normal. It requires models to define normality, models to quantify the deviation, and finally also arbitrary thresholds to label it. All three steps can be applied only in linear domains, where the whole depends on just very few parameters. For social mega-systems such as societies, applying the concept of the exception is nothing else than a methodological categorical illusion.

9. Critique of Paradoxically Conditioned Reason

Nothing could be more different from all that than pragmatism, for which the choreostemic space can serve as the ultimate theory. Pragmatism has always suffered from, or at least been vulnerable to, the reproach of relativism, because within pragmatism it is impossible to argue against it. With the choreostemic space we have constructed a self-sufficient, self-containing and necessary model that not only supports pragmatism, but also destroys any possibility of a universal normative position, or of normativity as such. Probably even more significantly, it also abolishes relativism through the implied concept of the concrete choreostemic figure, which can be taken as the differential of the institution or of tradition43. Choreostemic figures are quite stable, since they relate to mentality qua population, which means that they are formed as a population of mental acts, or as the mental acts of the members of a population. Even for individuals it is quite hard to change the attractor inhabited in the choreostemic space, to move into another attractor, or even to build up a new one.

In this section we will examine the structure of the way we can use the choreostemic space. Naively put, we could ask, for instance: How can we derive a guideline to improve actions? How can we use it to analyse a philosophical attitude or a political writing? Where are the limits of the choreostemic space? The structure behind such questions concerns a choice on a quite fundamental level. The issue is whether to argue strictly in positive terms, to allow negative terms, or even to define everything starting from negative terms only. In fact, there are quite a few different possibilities for arranging any melange of positivity and negativity. For instance, one could ontologically insist first on contingency as a positivity, upon which constraints would then act as a negativity. Such traces we will not follow here. We regard them either as not focused enough or, for most of them, as infected by realist ontology. In more practical terms, this issue of positivity and negativity concerns the way we deal with justifications and conditions. Deleuze argues for strict positivity; in that he follows Spinoza and Nietzsche. Common sense, in contrast, is given only insofar as it is defined against the non-common. In this respect, any of the existential philosophical attitudes, whether Christian religion, phenomenology or existentialism, are quite similar to each other. Even Levinas' Other is infected by it.
Admittedly, at first sight it seems quite difficult, if not impossible, to arrive at an appropriate valuation of other persons, the stranger, the strange, in short, the Other, but also the alienated. Or, likewise, how to derive or develop a stance towards the world that does not start from existence. Isn't existence the only thing we can be sure about? And isn't the external, the experience, the only stable positivity we can think of? Here we shout a loud No! Nevertheless, we definitely do not deny the external either. We just mentioned that the issue of justification is invoked by our interests here. This raises the question of the relation of the choreostemic space to epistemology. We will return to it in the second half of this section.

Positivity. Negativity. Obviously, the problem of the positive is not the positive itself, but how we are going to approach it. If we set it as primary, we run first into problems of justification, then into ethical problems. Setting the external, existence, or the factual positive as primary, we neglect the primacy of interpretation. Hence, we can't think of the positive as an instance. We have to think of it as a Differential. The Differential is defined as an entirety, yet not instantiated. Its factuality is potential; hence its formal being neither exhausts nor limits its factuality, or positivity. Its givenness demands action, that is, a decision (which is sayable regarding its immediacy) bundled with a performance (which is open and just demonstrable as a matter of fact). The concept of the choreosteme closely follows Deleuze's idea of the Differential: it is built into the possibility of expressibility that spans as the space between the _Directions as they are indicated by the transcendental aspects _A.

The choreostemic space does not constitute a positively definable stance, since it is not made from elements that could be defined apriori to any moment in time. Nevertheless it is well-defined. As an example that requires a similar approach, we may refer to the space of patterns that are potentially generated by Turing systems. The mechanics of Turing patterns, their mechanism, is likewise well-defined and given in its entirety, but the space of the patterns can't be defined positively. Without deep interpretation there is nothing like a Turing pattern (a minimal simulation sketch follows below). Maybe that is one of the reasons why the hard sciences still have difficulties in dealing adequately with complexity. Besides the formal description of the structure and mechanism of our space, there is nothing left about which one could speak or think any further. We can only proceed by practicing it. This mechanism establishes a paradoxicality insofar as it does not contain determinable locations. This indeterminateness is even much stronger than the principle of uncertainty as it is known from quantum physics, which so far is not constructed in a self-referential manner (at least if we follow the received views). Without any determinate location there seems to be no determinable figure either, at least none of which we could say that we grasp it "directly", or intuitively. Yet figures may indeed appear in the choreostemic space, though only by applying orthoregulative scaffolds, such as traditions, institutions, or communities that form cultural fields of proposals/propositions ("Aussagefeld"), as Foucault named it [40]. The choreostemic space is not a negativity, though.
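Here is the promised sketch: a minimal Gray-Scott reaction-diffusion system, one common way to produce Turing patterns. All parameter values are ordinary textbook choices, assumed here only for illustration. The point to observe is exactly the one made above: five constants and two update rules specify the mechanism entirely, yet they do not let us enumerate the pattern space in advance, and calling the final array a "pattern" is already an act of interpretation.

# Minimal Gray-Scott reaction-diffusion sketch (numpy only), illustrating
# a fully specified mechanism whose pattern space resists positive definition.
import numpy as np

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion, feed and kill rates
u = np.ones((n, n))
v = np.zeros((n, n))
m = slice(n // 2 - 5, n // 2 + 5)        # seed a small square perturbation
u[m, m], v[m, m] = 0.50, 0.25

def lap(a):
    """Discrete Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

for _ in range(steps):
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1.0 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

# The array itself is just numbers; rendering a coarse glyph map of v is
# already an interpretation imposed by the observer.
for row in v[::4, ::4]:
    print("".join(" .:*#"[max(0, min(4, int(c * 12.0)))] for c in row))

Shift F and k by a few thousandths and spots become stripes, or the field dies out entirely; which of these counts as "the same pattern" is not decided by the mechanism, but by the interpreting entity.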
The choreostemic space does not impose apriori determinable factual limits on a real situation, whether internal or external. It does not even provide the possibility of an opposite. Due to its self-referentiality it can be instantiated into positivity OR negativity, depending on the "vector" (actually more a moving cloud of probabilities) to which one currently belongs or which one is currently establishing through one's own performances. It is the necessity of choice itself, appearing in the course of the instantiation of the twofold Differential, that introduces the positive and the negative. In turn, whenever we meet an opposite, we can conclude that there has been a preceding choice within an instantiation. Think of de Saussure's structuralist theory of language, which is full of opposites. Deleuze argues (DR205) that the starting point of opposites betrays language:

In other words, are we not on the lesser side of language rather than the side of the one who speaks and assigns meaning? Have we not already betrayed the nature of the play of language – in other words, the sense of that combinatory, of those imperatives or linguistic throws of the dice which, like Artaud's cries, can be understood only by the one who speaks in the transcendent exercise of language? In short, the translation of difference into opposition seems to us to concern not a simple question of terminology or convention, but rather the essence of language and the linguistic Idea.

In more traditional terms one could say that it depends on the "perspective". Yet the concept of "perspective" is fallacious here, since it assumes a determinable standpoint. By means of the choreostemic space we may replace the notion of perspective by the choreostemic figure, which reflects both the underlying dynamics and the problematic field much more adequately. In contrast to a "perspective", a choreostemic figure spans across time. Another difference is that a perspective needs to be taken, which does not allow for continuity, while a choreostemic figure evolves continually. The possibility for negativity is determined along the instantiation from choreosteme to thought, while positivity is built into the choreostemic space as a potential. (Negative potentials are not possible.) Thus the choreostemic space is immune to any attempt (should we say poison pill?) to apply a dialectic of the negative, whether we consider single, double, or, absurdly enough, multiply repeated ones. Think of Hegel's negativity, Marx's rejection and proposal of a double negativity, or the dropback by Marcuse, all of which must be counted simply as stupidity. Negativity as the main structural element of thinking did not vanish, though, as we can see in the global movement of anti-capitalism or the global movement of anti-globalization. They all got, or still get, victimized by the failure to leave behind the duality of concepts and to turn them into a frame of quantitability. A recent example of that ominous fault is given by the work of Giorgio Agamben; Morgan writes:

Given that suspending law only increases its violent activity, Agamben proposes that "deactivating" law, rather than erasing it, is the only way to undermine its unleashed force. (p.60)

The first question, of course, is why the heck Agamben thinks that law, that is, any lawfulness, must be abolished. Such a claim includes the denial of any organization and any institution, above all as practical structures, as immaterial infrastructures and as grounding for any kind of negotiation.
As Rölli notes in accordance with Nietzsche, there is quite an unholy alliance between romanticism and modernism. Agamben, completely incapable of becoming aware of the virtual and the differential alike, and thus completely stuck in a luxuriating system of "anti" attitudes, finds himself faced with quite a difficulty. In his mono-(zero-)dimensional modernist conception of the world he claims:

What is found after the law is not a more proper and original use value that precedes law, but a new use that is born only after it. And use, which has been contaminated by law, must also be freed from its value. This liberation is the task of study, or of play.

Is it really reasonable to demand a world where uses, i.e. actions, are not "contaminated" by law? Morgan continues:

In proposing this playful relation Agamben makes the move that Benjamin avoids: explicitly describing what would remain after the violent destruction of normativity itself. "Play" names the unknowable end of "divine violence".

Obviously, Agamben never realized any paradox concerning rule-following. Instead, he runs amok against his own prejudices. "Divine violence" is the violence of ignorance. Yet abolishing knowledge does not help either, nor is it an admirable goal in itself. Like Derrida (another master of negativity) before him, in the end he demands a stop to interpretation, wholly and completely. Agamben provides us with nothing else than yet another modernist flavour of a philosophy of negativity that results in nihilistic in-humanism (quite contrary to Nietzsche, by the way). It is somewhat terrifying that Agamben currently receives no small amount of attention. In the last statement we are going to cite from Morgan, we can see in which eminent way Agamben is a thinker of the early 19th century, incapable of contributing any reasonable suggestion to current political theory:

But it is not only the negative structure of the argument but also the kind of negativity that is continuous between Agamben's analyses of aesthetic and legal judgement. In other words, "normality without a norm", which paradoxically articulates the subtraction of normativity from the normal, is simply another way of saying "law without force or application".

This Kantian formulation is fully packed with uncritical aprioris, such as normality or the normal, which marks Agamben as an epigonic utterer of common sense. As this ancient form of idealism demonstrates, Agamben obviously never heard of the linguistic turn either. The unfortunate issue with Agamben's writing is that it is considered both influential and pace-setting.

So, should we reject negativity and turn to positivity? Rejecting negativity turns problematic only if it is taken as an attitude that stretches out from the principle down to the activity. Notably, the same is true for positivity. We need not get rid of it, which would only send us into the abyss of totalised mysticism. Instead, we have to transcend both into the Differential that "precedes" them. While the former can be reframed into the conditionability of processes (but not into constraints!), the latter finds its non-representational roots in the potential and the virtual. If the positive is taken as a totalizing metaphysics, we soon end up in overdone specialization, uncritical neo-liberalism or even dictatorship, or in idealism as an ideology.
The turn to a metaphysics of (representational) positivity is incurably caught in the necessity of justification, which, unfortunately enough for positivists, can't be grounded within a positive metaphysics. To justify, that is, to give "good reasons", is a contradictio in adiecto if it is understood in its logical or idealistic form. Both negativity and positivity (in their representational instances) could work only if there were a preceding and more or less concrete subject, which of course cannot be presupposed when we are talking about "first reasons" or "justification". This does not only apply to political theory or practice; it even holds for logic as a positively given structure. Abstractly, we can rewrite this concreteness into countability. Turning the whole thing around, we see that as long as something is countable we will be confined by negativity and positivity on the representational level. Herein lies the limitation of the Universal Turing Machine. Herein lies also the inherent limitation of any materialism, whether in its profane or its theistic form.

By means of the choreostemic space we can see various ways out of this confined space. We may, for instance, remove the countability from numbers by mediatizing it into probabilities. Alternatively, we may introduce a concept like infinity to indicate the conceptualness of numbers and countability. It is somewhat interesting that it is the concept of the infinite that challenges the empiric character of numbers. Or we could deny representationalism in numbers while trying to keep countability; this creates the strange category of infinitesimals. Or we create multi-dimensional number spaces like the imaginary numbers. There are, of course, many, many ways to transcend the countability of numbers, which we can't even list here. Yet it is of utmost importance to understand that the infinite, like any other instance of departure from countability, is no longer a number. It is not countable either, even in the way Cantor proposed, that is, by thinking of a smooth space of countability that stretches between empiric numbers and the infinite. We may well count the symbols, but the reference has inevitably changed. The empirics targets the number of the symbols, not their content, which has been defined precisely as incountability. Only through this misunderstanding could one be struck by the illusion that there is something like the countability of the infinite. In some ways even the real numbers do not refer to the language game of countability, and the irrational numbers all the more so. It is much more appropriate to conceive of them as potential numbers; it may well be that precisely this is the major reason for the success of mathematics.

The choreostemic space is the condition for separating the positive and the negative. It is structure and tool, principle and measure. Its topology implies the necessity of instantiation and renders the representationalist fallacy impossible; nevertheless, it allows us to map mental attitudes and cultural habits for comparative purposes. Yet this mapping can't be used for modeling or anticipation. In some way it is the basis for subjectivity as a pre-specific property, that is, for a _Subjectivity, of course without objectivity. Therefore, the choreostemic space also allows us to overcome the naïve and unholy separation of subjects and objects, without denying the practical dimension of this separation. Of course, it does so by rejecting even the tiniest trace of idealism, or of apriorisms respectively.
The choreostemic space does not separate apriori the individual from the collective forms of mentality. In describing mentality it is not limited to the sayable; hence it can't be attacked or even swallowed by positivism. Since it provides the means to map those habitual _Mental figures, people could talk about transitions between different attractors, which we could call "choreostemic galaxies". The critical issue of values, those typical representatives of uncritical aprioris, is completely turned into a practical concern. Obviously, we can talk about "form" regarding politics without the need to invoke aesthetics. As Benjamin Morgan recently demonstrated (in the already cited [41]), aesthetics in politics necessarily refers to idealism.

Rejecting representational positivity, that is, any positivity that we could speak of in a formal manner, is equivalent to the rejection of a first reason as an aprioric instance. As we already proposed for representational positivity, the claim of a first reason as a point of departure that is never revisited likewise results in a motionless endpoint, somewhere in the triangle built from materialism, idealism and realism. Attempts to soften this outcome by proposing a playful, or hypothetical, if not pragmatic, "fixation of first principles" are not convincing, mainly because this does not allow for any coherence between games, which results in a strong relativity of principles. We simply could not talk about the relationships between those "firstness games". In other words, we would not gain anything. An example of such a move is provided by Epperson [42]. Though he refers to the Aristotelian potential, he sticks with representational first principles, in his case logic in the form of the principle of the excluded middle and the principle of non-contradiction. Epperson does not become aware of the problems regarding the use of symbols in doing this. Wittgenstein once criticized the very same point in the Principia of Russell and Whitehead. Additionally, representational first principles are always transporters of ontological claims. As soon as we recognize that the world is NOT made from objects, but from relations that are organized, selected and projected by each individual through interpretation, such principles face severe difficulties. Only naive realism allows for a frictionless use of first principles. Yet it does so for a price that is definitely too high.

We think that the way we dissolved the problem of first reason has several advantages compared to Deleuze's proposal of the absolute plane of immanence. First, we do not need the notion of absoluteness, which appears at several instances in Deleuze's main works "What is Philosophy?" [35] (WIP), "Empiricism and Subjectivity" [43], and his "Pure Immanence" [44]. The second problem with the plane of immanence concerns the relation between immanence and transcendence. Deleuze refers to two different kinds of transcendence. While in WIP he denounces transcendence as inappropriate due to its heading towards identity, the whole concept of transcendental empiricism is built on the Kantian invention. This two-fold measure can't be resolved. Transcendence should not be described by its target. Third, Deleuze's distinction between the absolute plane of immanence and the "personal" one, instantiated by each new philosophical work, leaves a major problem: Deleuze leaves it completely opaque how the two kinds of immanence relate to each other.
Additionally, there is a potentially infinite number of "immanences", implying a classification, a differential and an abstract kind of immanence, all of which is highly corrosive for the idea of immanence itself, at least as long as one does not conceive of immanence as an entity that could be naturalized. In this way, Deleuze splits the problem of grounding into two parts: (1) a pure, hence "transcendent" immanence, and (2) the gap between absolute and personal immanence. While the first part could be accepted, the second is left completely untouched by Deleuze. The problem of grounding has just been moved into a layer cake. Presumably, these problems are caused by the fact that Deleuze considers only concepts, or _Concepts, if we would like to consider the transcendental version as well. Several of those imply the plane of immanence, which can't be described, which has no structure, and which is just implied by the factuality of concepts. Our choreostemic space moves this indeterminacy and openness into a "form" aspect in a non-representational, non-expressive space with the topology of a double differential. But more important is that we not only have a topology at our disposal which allows us to speak about it without imposing any limitation; we also use three other foundational and irreducible elements to think that space, the choreostemic space. The CS thus also brings immanence and transcendence into one and the same structure.

In this section we have discussed a change of perspective towards negativity and positivity. This change became accessible through the differential structure of the choreostemic space. The problematic field represented by them, and all the respective pseudo-solutions, has been dissolved. This abandonment we achieved through the "Lagrangean principle": we replaced the constants (positivity and negativity, respectively) by a procedure (the instantiation of the Differential) plus a different constant. Yet this constant is itself not a finite replacement, i.e. not a "constant" in the sense of an invariance. The "constant" is only a relative one: the orthoregulation, comprising habits, traditions and institutions. Reason, or, as we would like to propose for its less anthropological character and better scalability, mentality, has been reconstructed as a kind of omnipresent reflection on the conditionability of proceedings in the choreostemic space. The conditionability can't be determined in advance of the performed mental proceedings (acts), which to many could appear somewhat paradoxical. Yet it is not. The situation is quite similar to Wittgenstein's transcendental logic, which also gets instantiated just by doing something, while the possibility for performance precedes that of logic.

Finally, there is of course the question whether there is any condition that we impose onto the choreostemic space itself, a condition that would not be resolved by its self-referentiality. Well, there is indeed one: the only unjustified apriori of the choreostemic space seems to be the primacy of interpretation (POI). This apriori, however, is only a weak one and, above all, a practicable one, or one that derives from the openness of the world. Ultimately, the POI is in turn a direct consequence of the time-being. Any other aspect of interpretation is indeed absorbed by the choreostemic space and its self-referentiality, hence requiring no further external axioms or the like.
In other words, the starting point of the choreostemic space, or of the philosophical attitude of the choreosteme, is openness, the insight that the world is far too generative to comprehend all of it. The fact that it is almost without any apriori renders the choreostemic space suitable for those practical purposes where openness and its sibling, ignorance, call for dedicated activity, e.g. in all questions of cross-disciplinarity or trans-culturality. Insofar as different persons establish different forms of life, the choreostemic space is even highly relevant for any aspect of cross-personality. This in turn gives rise to a completely new approach to ethics, which we can't pursue here, though.

Mentality without Knowledge

Two of the transcendental aspects of the choreostemic space are _Model and _Concept. The concepts of model and concept, that is, instantiations of our aspects, are key terms in the philosophy of science and in epistemology. Moreover, we proposed that our approach brings with it a new image of thought. We also said that mental activities inscribe figures, or attractors, into that space. Since we are additionally interested in the issue of justification (we are trying to get rid of justifications), the question of the relation between the choreostemic space and epistemology is triggered. The traditional primary topic of epistemology is knowledge and how we acquire it, particularly, however, the questions of, first, how to separate it from beliefs (in the common sense), and second, how to secure it in such a way that we could possibly speak of truth. In a general account, epistemology is also about the conditions of knowledge. Our position is pretty clear: the choreostemic space is something categorically different from episteme or epistemology. What are the reasons?

We reject the view that truth in its usual version is a reasonable category for talking about reasoning. Truth as a property of a proposition can't be a part of the world. We can't know anything for sure, neither regarding the local context nor globally. Truth is an element of logic, and the only truth we can know of is empty: a=a. Yet knowledge is supposed to be about empirical facts (arrangements of relations). Wittgenstein thus set logic as transcendental. Only transcendental logic can be free of semantics, and thus only within transcendental logic can we speak of truth conditions. The consequence is that we can observe either of two effects. First, any actual logic contains some semantic references, because of which it could be regarded as "logic" only approximately. Second, insisting on the application of logical truth values to actual contexts instead results in a categorical fault. The conclusion is that knowledge can be secured neither locally, from a small given set of sentences about empirical facts, nor globally. We can't even measure the reliability of knowledge, since this would mean having more knowledge about the fact than is given by the local observations. As a result, paradoxes and antinomies occur. The only thing we can do is try to build networks of stable models for a negotiable anticipation with negotiable purposes. In other words, facts are not given by relations between objects, but rather as a system of relations between models, which as a whole is accepted by a community of co-modelers and provides satisfying anticipatory power. Compared to that, the notion of partial truth (Newton da Costa & Steven French) is still misconceived.
It keeps sticking to the wrong basic idea, and as such it is inferior to our concept of the abstract model. After all, any account of truth violates the fact that it is itself a language game. Dropping the idea of truth, we could already conclude that the choreostemic space is not about epistemology. Well, one might say, OK, then it is an improved epistemology. Yet this we would reject as well. The reason is a grammatical one. Knowledge in the sense of epistemology is about either sayable or demonstrable facts. If someone says "I know", or if someone ascribes to another person "he knows", or if a person performs well and in hindsight her performance is qualified as "based on intricate knowledge" or the like, we postulate an object or entity called knowledge, almost in an ontological fashion. This perspective has been rejected by Isabelle Peschard [45]. According to her, knowledge can't be separated from activity, or "enaction", and knowledge must be conceived as a socially embedded practice, not as a stateful outcome. For her, knowledge is not about representation at all. This includes the rejection of truth conditions as a reasonable part of a concept of knowledge. Moreover, it will be impossible to give a complete or analytical description of this enaction, because it is impossible to describe (= to explicate) the Form of Life in a self-contained manner. In any case, however, knowledge is always, at least partially, about how to do something, even if it concerns highly abstract issues. That means that a partial description of knowledge is possible. Yet, as a second grammatical reason, the choreostemic space does not allow for any representations at all, due to its structure, which is strictly local and made up from the second-order differential.

There are further differences. The CS is a tool for the expression of mental attractors, to which we can assign distinct yet open forms. To do so we need the concepts of mediality and virtuality, which are not mentioned anywhere in epistemology. Mental attractors, or figures, will always "comprise" beliefs, models, ideas and concepts as instances of transcendental entities, and these instances are local instances, which are even individually constrained. It is not possible to explicate these attractors other than by "living" them. In some way, the choreostemic space is intimately related to the philosophy of C.S. Peirce, which is called "semiotics". As he did, we propose a primacy of interpretation. We fully embrace his emphasis that signs only refer to signs. We agree with his attempt to discern different kinds of signs. And we think that his firstness, secondness and thirdness could be related to the mechanisms of the choreostemic space. In some way, the CS could be conceived as a generalization of semiotics. Saying this, we may also point to the fact that Peirce's philosophy is not regarded as epistemology either.

Rejecting the characterization of the choreostemic space as an epistemological subject, we can now understand the contours of the notion of mentality even better. The "mental" can't be considered a set of things like beliefs, wishes, experiences, expectations, thought experiments, etc. These are just practices, or likewise practices of speaking, about the relation between private and public aspects of thinking. All of these items belong to the same mentality, to the same choreostemic figures.
In contrast to Wittgenstein, however, we propose to discard completely the distinction between internal and external aspects of the mental.

"And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion." [PI §693]

One of the transcendental aspects in the CS is concept, another is model. Both together provide the aspects of use, idea and reference; that is, there is nothing internal and external any more. It simply depends on the purpose of the description, or the kind of report we want to create about the mental, whether we talk about the mental in an internalist or an externalist way, whether we talk about acts, concepts, signs, or models. Regardless of what we do as humans, it will always be predominantly a mental act, irrespective of the accompanying material reconfigurations.

10. Conclusion

It is probably not an exaggeration to say that in the last two decades the diversity of mentality has been discovered. A whole range of developments and shifts in public life may have contributed to that, spanning several domains: politics, technology, social life, behavioural science and, last but not least, brain research. We saw the end of the Cold War, which has been signalling an unrooting of functionalism far beyond the domain of politics, and simultaneously the growth and discovery of the WWW and its accompanying "scopic44 media" [46, 47]. The "scopics" spurred the so-called globalization, which so far has worked much more in favour of the recognition of diversity than it has levelled that diversity. While we are still in the midst of the popularization and increasingly abundant usage of so-called machine learning, we already witness an intensified mutual penetration and amalgamation of technological and social issues. In the behavioural sciences, probably also supported by the deepening of mediatization, an unforeseen interest in the mental and social capabilities of animals has manifested, pushing back the merely positivist and dissecting description of behavior. One of the most salient examples is the confirmation of cultural traditions in dolphins and orcas, concerning communication as well as highly complex collaborative hunting. The unfolding of collaboration requires the mutual and temporal assignment of functional roles for a given task. This not only presupposes a true understanding of causality, but even its reflected use as a game in probabilistic spaces.

Let us distil three modes or forms here: (i) animal culture, (ii) machine-becoming and, of course, (iii) human life forms in the age of intensified mediatization. All three modes must be considered "novel" ones, for one reason or another. We won't go into any further detail here, yet it is pretty clear that the triad of these three modes renders any monolithic or anthropologically imprinted form of philosophy of mind impossible. In turn, any philosophy of mind that is limited to just the human brain's relation to the world, or, even worse, that imposes analytical, logical or functional perspectives onto it, must be considered seriously defective. This still applies to large parts of the mainstream in the philosophy of mind (and even in ethics). In this essay we argued for a new Image of Thought that is independent of the experience of or by a particular form of life, form of informational45 organization or cultural setting, respectively. This new Image of Thought is represented through the choreostemic space.
This space is dynamic and active, and can be described formally only if it is “frozen” into an analytical reduction. Yet, its self-referentiality and self-directed generativity are major ingredients. This self-referentiality takes a salient role in the space’s capability to leave its conditions behind. One of the main points of the choreostemic space (CS) probably is that we cannot talk about “thought”—regardless of its quasi-material and informational foundations—without referring to the choreostemic space. It is a (very) strong argument against Rylean concepts about the mind that claim the irrelevance of the concept of the mental by proposing that looking at behavior is sufficient to talk about the “mind”. Of course, the CS does not support “the dogma of the ghost in the machine” either. The choreostemic space defies (and helps to defy) any empirical and thus also anthropological myopias through its triple feature of transcendental framing, differential operation and immanent rooting. Thus it is immune against naturalist fallacies such as Cartesian dualism as well as against arbitrariness or relativism. Nor could it be infected by any kind of preoccupation such as idealism or universalism. Although one could regard it in some way as “pure Thought”, or consider it as the expressive situs of it, its purity is not an idealistic one. It dissolves either into the metaphysical transcendentality of the four conceptual aspects, that is, the _Model, _Mediality, _Concept, and _Virtuality, or it takes the form of the Differential, which could be considered as a kind of practical transcendentality46 [48]. There, as one of her starting points, Bühlmann writes: Deleuze’s fundamental critique in Difference and Repetition is that throughout the history of philosophy, these conditions have always been considered as »already confined« in one way or another: Either within »a formless, entirely undifferentiated underground« or »abyss« even, or within the »highly personalized form« of an »autocratically individuated Being«. Our choreostemic space also provides an answer to the problematics of conditions.47 Like Deleuze, we suggest regarding conditions only as secondary, that is, as relevant entities only after any actualization. This avoids negativity as a metaphysical principle. Yet, in order to get completely rid of any condition while at the same time retaining conditionability as a transcendental entity, we have to resort to self-referentiality as a generic principle. Hence, our proposal goes beyond Deleuze’s framework as he developed it from “Difference and Repetition” until “What is Philosophy?”, since he never made this move. Basically, the CS supports Wittgenstein’s rejection of materialism, which has experienced a completely unjustified revival in the various shades of neuro-isms. Malcolm cites him [50]: It makes as little sense to ascribe experiences, wishes, thoughts, beliefs, to a brain as to a mushroom. (p.186) This support should not surprise, since the CS was deliberately constructed to be compatible with the concept of the language game. While the CS also supports his famous remark about meaning, it is also clear that the CS may be taken as a means to overcome the debate about external or internal primacies or foundations of meaning. The duality of internal vs. external is neutralized in the CS. While modeling, and thus the abstract model, always requires some kind of material body, hence representing the route into some interiority, the CS is also spanned by the Concept and by Mediality.
Both concepts are explicit ties between any kind of interiority and any kind of exteriority, without preferring a direction at all. The proposal that any mental activity inscribes attractors into that space just means that interiority and exteriority can’t be separated at all, regardless of the actual conceptualisation of mind or mentality. Yet, in accordance with PI 693 we also admit that the choreostemic space is not equal to the mental. Any particular mentality unfolds as an actual performance in the CS. Of course, the CS does not describe material reconfigurations, environmental contingency etc. and the performance taking place “there”. In other words, it does not cover any aspect of use. On the other hand, material reconfigurations are simply not “there” as long as they are not interpreted by applying some kind of model. The CS clearly shows that we should regard questions like “Where is the mind?” as a kind of grammatical mistake, as Blair lucidly demonstrates [51]. Such a usage of the word “mind” not only implies irrevocably that it is a localizable entity. It also claims its conceptual separatedness. Such a conceptualization of the mind is illusory. The consequences for any attempt to render “machines” “more intelligent” are obviously quite dramatic. As for the brain, it is likewise impossible to “localize” mental capacities in the case of epistemic machines. This fundamental de-territorialization is not a consequence of scale, as in quantum physics. It is a consequence of the verticality of the differential, the related necessity of forms of construction, and the fact that a non-formal, open language, implying randolations to the community, is mandatory to deal with concepts. One important question about a story like the “choreostemic space”, with its divergent but nevertheless intimately tied four-fold transcendentality, is about the status of that space. What “is” it? How could it affect actual thought? Since we started even with mathematical concepts like space, mappings, topology, or the differential, and since our argument frequently invokes the concept of mechanism, one could suspect that it is a piece of analytical philosophy. This ascription we can clearly reject. Peter Hacker convincingly argues that “analytical philosophy” can’t be specified by a set of properties of such an assumed philosophy. He proposes to consider it as a historical phase of philosophy, with several episodes, beginning around 1890 [53]. Nevertheless, during the 1970s a set of beliefs formed a kind of basic setup. Hacker writes: But there was broad consensus on three points. First, no advance in philosophical understanding can be expected without the propaedeutic of investigating the use of the words relevant to the problem at hand. Second, metaphysics, understood as the philosophical investigation into the objective, language-independent, nature of the world, is an illusion. Third, philosophy, contrary to what Russell had thought, is not continuous with, but altogether distinct from science. Its task, contrary to what the Vienna Circle averred, is not the clarification or ‘improvement’ of the language of science. Where we definitely disagree is on the point about metaphysics. We indeed refute the view that metaphysics is about the objective, language-independent nature of the world; metaphysics understood as such we would reject as well. An example of this kind of thinking is provided by the writings of Whitehead.
It should have become clear throughout our writing that we stick to the primacy of interpretation, and accordingly we regard the belief in an objective reality as deeply misconceived. Thereby we neither claim that our mental life is independent of the environment—as radical constructivism (Varela & Co.) does—nor do we claim that there is no external world around us that is independent of our perception and constructions. Such is just belief in metaphysical independence, which plays an important role in modernism. The idea of objective reality is also infected by this belief, resulting in a self-contradiction. For “objective” makes sense only as an index to some kind of sociality, and hence to a group sharing a language, and further to the use of language. The claim of “objective reality” is thus childish. More importantly, however, we have seen that the self-referentiality of terms like concept (we called those “strongly singular terms”) forces us to acknowledge that Concept, much like logic, is a transcendental category. Obviously we refer strongly to transcendental, that is metaphysical, categories. At the same time we also propose, however, that there are manifolds of instances of those transcendental categories. The choreostemic space describes a mechanism. In that it resembles biology, where the concept of mechanism is an important epistemological tool. As such, we try to defend against mysticism, against the threat posed by any all-too-quick reference to the “Lebenswelt”, the form of life and the ways of living. But is it really an “analysis”? Putnam called “analysis” an “inexplicable noise” [54]. His critique was precisely that semantics can’t be found by any kind of formalization, that is, outside of the use of language. In this sense we certainly are not doing analytic philosophy. As a final point we again want to emphasize that it is not possible to describe the choreostemic space completely, that is, all the conditions and effects, etc., due to its self-referentiality. It is a generative space that confirms its structure by itself. Nevertheless it is neither useless nor does it support solipsism. It can be used to describe the entirety of mental activity, though only as a fully conscious act, and this description is a fully non-representational one. In this way it not only overcomes the Cartesian dualism about consciousness; in fact, it is another way to criticise the distinction between interiority and exteriority. For one part we agree with Wittgenstein’s critique (see also the work of P.M.S. Hacker about that), which identifies the “mystery” of consciousness as an illusion. The concept of the language game, which is for one part certainly an empirical concept, is substantial for the choreostemic space. Yet, the CS provides several routes between the private and the communal, without actually representing one or the other. The CS does not distinguish between the interior and the exterior at all; just recall that mediality is one of the transcendental aspects. Along with Wittgenstein’s “solipsistic realism” we consequently also reject the idea that ontology can be about the external world, as this again would introduce such a separation. Quite to the contrary, the CS dissolves the need for the naive conception of ontology. Ontology makes sense only within the choreostemic space.
Yet, we certainly embrace the idea that mental processes are ultimately “based” on physical matter, but unfolded into and by their immaterial external surrounds, yielding an inextricable compound. Referring to any “neuro” stuff regarding the mental neither “explains” anything nor is it helpful in any regard, whether one considers it as neuro-science or as neuro-phenomenology. Summarizing the issue we may say that the choreostemic space opens a completely new level for any philosophy of the mental, not just of what is being called the human “mind”. It also allows us to address scientific questions about the mental in a different way, and it clarifies the route to machines that could draw their own traces and figures into that space. It makes irrevocably clear that any kind of functionalism or materialism is once and for all falsified. Let us now finally inspect the initial question that we put forward in the editorial essay. Is there a limit for the mental capacity of machines? If so, which kind of limit, and where could we draw it? The question about the limit of machines directly triggers the question about the image of humanity („Bild des Menschen“), which is fuelled from the opposite direction. So, does this imply a kind of demarcation line between the domain of the machines and the realm of the human? Definitely not, of course. To opt for such a separation would not only follow the idealist-romanticist line of criticizing technology, but also instantiate a primary negativity. Based on the choreostemic space, our proposal is a fundamentally different one. It can be argued that this space contains any condition of any thought as a population of unfolding thoughts. These unfoldings inscribe different successions into the space, appearing as attractors and figures. The key point of this is that different figures, representing different Lebensformen (Forms of Life) that are probably even incommensurable with each other, can be related to each other without reducing any of them. The choreostemic space is a space of mental co-habitation. Let us for instance start with the functionalist perspective that has been so abundant in modernism since the times of Descartes. A purely functionalist stance is just a particular figure in that space, as applies to any other style of thinking. Using the dictum of the choreosteme as a guideline, it is relatively easy to widen the perspective into a more appropriate one. Several developmental paths into a different choreostemic attractor are possible: for instance, mediatization through social embedding [52], opening through autonomous associative mechanisms as we have described it, or the ad hoc recombination of conceptual principles as demonstrated by Douglas Hofstadter. Letting a robot range freely around also provokes the first tiny steps away from functionalism, albeit the behavioral Bauplan of the insects (Arthropoda) demonstrates that this does not install a necessity for the evolutionary path to advanced mental capabilities. The choreostemic space can serve as such a guideline because it is not infected by anthropology in any regard. Nevertheless it allows us to speak clearly about concepts like belief and knowledge, of course, without reducing these concepts to positive definite or functionalist definitions. It also remains completely compatible with Wittgenstein’s concept of the language game. For instance, we reconstructed the language game “knowing” as a label for a pointer (say, a reference) to a particular image of thought and its use.
Of course, this figure should not be conceived as a fixed-point attractor, as the various shades of materialism, idealism and functionalism actually would conceive it (if they argued along the choreosteme). It is somewhat interesting that here, by means of the choreostemic space, Wittgenstein and Deleuze approach each other quite closely, something they themselves probably would not have endorsed. Where is the limit of machines, then? I guess any answer must refer to the capability to leave a well-formed trace in the choreostemic space. As such, the limits of machines are to be found in the same way as they are found for us humans: to feel and to act as an entity that is able to contribute to culture and to assimilate it in its mental activity. We started the choreostemic space as a framework to talk about thinking, or more generally, about mentality, in a non-anthropological and non-reductionist manner. In the course of our investigation, we found a tool that actualizes itself into real social and cognitive situations. We also found the infinite space of choreostemic galaxies as attractors for eternal returns without repetition of the identical. The choreosteme keeps the “any” alive without subjugating individuality; it provides a new and extended level of sayability without falling into representationalism. Taken together, as a new Image of Thought it allows us to develop thinking deliberately and as part of a multitudinous variety.

1. This piece is thought of as a close relative of Deleuze’s Difference & Repetition (D&R) [1]. Think of it as a satellite of it, whose point of nearest approach is at the end of part IV of D&R, and thus also as a kind of extension of D&R.

2. Deleuze, of course, belongs to them, but so does Ludwig Wittgenstein (see §201 of PI [2], the “paradox” of rule following), and Wilhelm Vossenkuhl [3], who presented three mutually paradoxical maxims as a new kind of theory of morality (ethics), one that resists the reference to monolithically set first principles, such as, for instance, in John Rawls’ “Theory of Justice”. The work of those philosophers also provides examples of how to turn paradoxicality productive, without creating paradoxes at all, the main trick being to overcome their fixation by a process. Many others, including Derrida, just recognize paradoxes, but are neither able to conceive of paradoxicality nor to distinguish it from paradoxes; hence they take paradoxes just as unfortunate ontological knots. In such works, one can usually find one or the other way to prohibit interpretation (think of the trace, Germ. “Spur”, in Derrida).

3. Paradoxes and antinomies like those described by Taylor, Banach-Tarski, Russell or of course Zeno are all defective, i.e. pseudo-paradoxes, because they violate their own “gaming pragmatics”. They are not paradoxical at all, but rather either simply false or arbitrarily fixed within the state of such violation. The same fault is committed by the Sorites paradox and its relatives. They are all mixing up—or colliding—the language game of countability or counting with the language game of denoting non-countability, as represented by the infinite or the infinitesimal. Instead of saying that they violate the a priori self-declared “gaming pragmatics”, we also could say that they change the most basic reference system on the fly, without any indication of doing so. This may happen through an inadequate use of the concept of infiniteness.
4. DR 242, eternal return: it is not the same and the identical that returns, but the virtual structuredness (not even a “principle”), without which metamorphosis can’t be conceived.

5. In „Difference and Repetition“, Deleuze chose to spell “Idea” with a capital letter, in order to distinguish his concept from the ordinary word.

7. Here we find interesting possibilities for a transition to Alan Turing‘s formal foundation of creativity [5].

8. This includes the usage of concepts like virtuality, differential and problematic field, the rejection of the primacy of identity and, closely related to that, the rejection of negativity, the rejection of the notion of representation, etc. Rejecting the negative opens an interesting parallel to Wittgenstein’s insisting on the transcendentality of logics and the subordination of any practical logic to performance. Since the negative is a purely symbolic entity, it is also purely a posteriori to any genesis, that is, self-referential performance.

9. I would like to recommend taking a look at the second part of part IV in D&R, and maybe also at the concluding chapter therein (download it here).

10. Saying „we“ here is not just due to some hyperbolic politeness. The targeted concept of this essay, the choreosteme, has been developed by Vera Bühlmann and the author of this essay (Klaus Wassermann) in close collaboration over a number of years. Finally the idea proved to be so strong that now there is some dissent about the role and the usage of the concept.

11. For belief revision as described by others, see the overview @ Stanford and a critique by Pollock, who clarified that belief revision as comprised and founded by the AGM theory (see below) is incompatible with standard epistemology.

12. By symbolism we mean the belief that symbols are the primary and a priori existent entities for any description of any problematic field. In machine-based epistemology, for instance, we cannot start with data organized in tables because this presupposes a completed process of “ensymbolization”. Yet, in the external world there are no symbols, because symbols only exist subsequent to interpretation. We can see that symbolism creates a chicken-and-egg problem.

13. Miriam Meckel, communication researcher at the University of Zürich, is quite active in drawing dark-grey pictures. Recently, she coined “Googlem” as a resemblance to Google and Golem. Meckel commits several faults in that: she does not understand the technology (accusing Google of using averages), and she forgets about the people (programmers) behind “the computer”, and the people using the software as well. She follows exactly the pseudo-romantic separation between nature and the artificial. Miriam Meckel, Next. Erinnerungen an eine Zukunft ohne uns, Rowohlt 2011.

14. Here we find a resemblance to Wittgenstein’s refusal to attribute to philosophy the role of an enabler of understanding. According to Wittgenstein, philosophy does not and cannot even describe. It can only show.

15. This also concerns the issue of cross-culturality.

16. Due to some kind of cultural imprinting, a frequently and solitarily exercised habit, people almost exclusively think of Cartesian spaces as soon as a “space” is needed. Yet, there is no necessary implication between the need for a space and the Cartesian type of space. Even Deleuze did not recognize the difficulties implied by the reference to the Cartesian space, not only in D&R, but throughout his work. Nevertheless, there are indeed passages (in What is Philosophy?
with “planes of immanence”, or in the “Fold”) where it seems that he could have sensed a different conception of space.

17. For the role of „elements“ please see the article about „Elementarization“.

18. Vera Bühlmann [8]: „Insbesondere wird eine Neu-Bestimmung des aristotelischen Verhältnisses von Virtualität und Aktualität entwickelt, unter dem Gesichtspunkt, dass im Konzept des Virtuellen – in aller Kürze formuliert – das Problem struktureller Unendlichkeit auf das Problem der zeichentheoretischen Referenz trifft.“ [In particular, a re-determination of the Aristotelian relation of virtuality and actuality is developed, from the point of view that in the concept of the virtual – put very briefly – the problem of structural infinity meets the problem of sign-theoretic reference.]

19. which is also a leading topic of our collection of essays here.

20. e.g. Gerhard Gamm, Sybille Krämer, Friedrich Kittler

21. cf. G.C. Tholen [7], V. Bühlmann [8].

22. see the chapter about machinic platonism.

23. Actually, Augustine instrumentalises the discovered difficulty to propose the impossibility of understanding God’s creation.

24. It is an „ancestry“ only with respect to the course in time, as the result of a process, not however in terms of structure, morphology etc.

25. cf. C.S. Peirce [16], Umberto Eco [17], Helmut Pape [18];

26. Note that in terms of abstract evolutionary theory, rugged fitness landscapes enforce specialisation, but also bring along an increased risk of extinction for the whole species. Flat fitness landscapes, on the other hand, allow for great diversity. Of course the fitness landscape is not a stable parameter space, neither locally nor globally. In some sense, it is not even a determinable space. Much like the choreostemic space, it would be adequate to conceive of the fitness landscape as a space built from the 2-set of transformatory power and the power to remain stable. Both can be determined only in hindsight. This paradoxicality is not by chance, yet it has not been discovered as an issue in evolutionary theory.

27. Of course I know that there are important differences between verbs and substantives, which we may level out in our context without losing too much.

28. In many societies, believing has been thought to be tied to religion, the rituals around the belief in God(s). Since the Renaissance, with upcoming scientism and the profanisation of societies, religion and science established a sort of replacement competition. Michel Serres described how scientists took over the positions and the funds previously held by the clergy. The impression of a competition is well understandable, of course, if we consider the “opposite direction” of the respective vectors in the choreostemic space. Yet, it is also quite mistaken, maybe itself provoked by excessive idealisation, since neither can the cleric make his day without models, nor the scientist his without beliefs.

29. The concept of “theory” referred to here is oriented towards a conceptualisation based on language game and orthoregulation. Theories need to be conceived as orthoregulative milieus of models in order to be able to distinguish between models and theories, something which can’t be accomplished by analytic concepts. See the essay about the theory of theory.

30. Of course, we do not claim to completely cover the relation between experiments, experience and observation on the one side and their theoretical account on the other. We just would like to emphasize the inextricable dynamic relation between modeling and concepts in scientific activities, whether in professional or “everyday” types of science. For instance, much could be said in this regard about the path of decoherence from information and causality.
Both aspects, the decoherence and the flip from intensifying modeling over to a conceptual form, have not been conceptualized before. The reason is simple enough: there was no appropriate theory about concepts. When, for instance, Radder [28] contends that the essential step from experiment to theory is to disconnect theoretical concepts from the particular experimental processes in which they have been realized [p.157], then he not only misconceives the status and role of theories, he also does not realize that experiments are essentially material actualisations of models. Abstracting regularities from observations into models and shaping the milieu for such a model in order to find similar ones, thereby achieving generalization, is anything but disconnecting them. It seems that he overshot a bit in his critique of scientific constructivism. Additionally, his perspective does not provide any possibility to speak about the relation between concepts and models. Though Radder obviously had the feeling of a strong change on the way from putting observations into scene towards concepts, he fails to provide a fruitful picture of it. He can’t move beyond that feeling towards insight, as he muses about “… ‘unintended consequences’ that might arise from the potential use of theoretical concepts in novel situations.” Such descriptions are close to scientific mysticism. Radder’s account is a quite recent one, but others are not really helpful about the relation between experiment, model and concept either. Kuhn’s praised concept of paradigmatic changes [24] can be rated at most as a phenomenological or historizing description. Sure, his approach brought a fresh perspective in times of overdone reductionism, but he never provided any kind of abstract mechanism. Other philosophers of science stuck to concepts like prediction (cf. Reichenbach [20], Salmon [21]) and causality (cf. Bunge [22], Pearl [23]), which of course can’t say anything about the relation to the category of concepts. Finally, Nancy Cartwright [25], Isabelle Stengers [26], Bruno Latour [9] or Karin Knorr Cetina [10] are representatives of the various shades of constructivism, whether individually shaped or as a phenomenon embedded into a community, which also can’t say anything about concepts as categories. A scan through the Journal of Applied Measurement did not reveal any significantly different items. Thus, so far philosophy of science, sociology and the history of science have been unable to understand the particular dynamics between models and concepts as abstract categories, i.e. as _Models or _Concepts.

31. If the members of a community, or even the participants in random interactions within it, agree on the persistence of their relations, then they will tend to exhibit a stronger propensity towards collaboration. Robert Axelrod demonstrated that on the formal level by means of a computer experiment [33]. He was the first to propose game theory as a means to explain the choice of strategies between interactees.

32. Orig.: „Seit über 200 Jahren ist die Philosophie anthropologisch bestimmt. Was das genauer bedeutet, hat sie dagegen kaum erforscht.“ [For more than 200 years philosophy has been anthropologically determined. What that means more precisely, however, it has hardly investigated.]

33. Orig.: „Nietzsches Idealismuskritik, die in vielen Schattierungen vorliegt und immer auf das philosophische Selbstmissverständnis eines reinen Geistes und reiner Begriffe zielt, richtet sich auch gegen ein bestimmtes Naturverständnis.“ (KAV439) [Nietzsche’s critique of idealism, which exists in many shades and always aims at the philosophical self-misunderstanding of a pure mind and pure concepts, is also directed against a particular understanding of nature.]

34. More precisely, in evolutionary processes the capability for generalization is selected under conditions of scarcity.
Scarcity, however, is inevitably induced under the condition of growth or consumption. It is important to understand that newly emerging levels of generalization do not replace former levels of integration. Those undergo a transformation with regard to their relations and their functional embedding, i.e. with regard to their factuality. In the morphology of biological specimens this is well known as “Überformung”. For more details about evolution and generalization please see this.

35. The notions of “philosophy of nature” or even “natural philosophy” are strictly inappropriate. Both “kinds” of philosophy are not possible at all. They have to be regarded as a strange mixture of contemporarily available concepts from science (physics, chemistry, biology), mysticism or theism, and the mistaken attempt to transfer topics as such from there to philosophy. Usually, the result is simply a naturalist fallacy with serious gaps regarding the technique of reflection. Think about Kant’s physicalistic tendencies throughout his philosophy, the unholy adaptation of Darwinian theory, analytic philosophy, which is deeply influenced by cybernetics, or the comeback of determinism and functionalism due to almost ridiculous misunderstandings of the brain. Nowadays it must be clear that philosophy before the reflection of the role of language, or more generally, before the role of languagability—which includes processes of symbolization and naming—can’t be regarded as serious philosophy. Results from the sciences can be imported into philosophy only as formalized structural constraints. Evolutionary theory, for instance, first has to be formalized appropriately (as we did here) before it can be of any relevance to philosophy. Yet, what is philosophy? Besides Deleuze’s answer [35], we may conceive philosophy as a technique of asking about the conditionability of the possibility to reflect. Hence, Wittgenstein said that philosophy should be regarded as a cure. Thus philosophy includes fields like ethics as a theory of morality, or epistemology, which we developed here into a “choreostemology”.

36. Orig.: „Der Punkt, um den es sich namentlich handelt, lässt sich ganz bestimmt angeben. Es ist gleichsam der Apfel in dem logischen Sündenfall der deutschen Philosophie nach Kant: das Verhältnis zwischen Subjekt und Objekt in der Erkenntnis.“ [The point at issue can be stated quite definitely. It is, as it were, the apple in the logical Fall of German philosophy after Kant: the relation between subject and object in cognition.]

37. Although Rölli usually esteems Deleuze’s philosophy of the differential, here he refers to the difference. I think it should be read as “divergence and differential”.

38. Orig.: „Nach allem wird klarer geworden sein, dass es sich bei diesem Pragmatismus nicht um einen einfachen Pragmatismus handelt, sondern um einen mit aller philosophischen Raffinesse konstruierten Pragmatismus der Differenz.“ [After all this, it will have become clearer that this pragmatism is not a simple pragmatism, but a pragmatism of difference constructed with all philosophical refinement.]

39. As scientific facts, quantum physics, the probabilistic structure of the brain and the non-representationalist working of the brain falsify determinism as well as the finiteness of natural processes, even if there should be something like “natural laws”.

40. See the article about the structure of comparison.

41. Even Putnam does so, not only in his early functionalist phase, but still in Representation and Reality [36].

42. Usually, philosophers are trained only in logics, which does not help much, since logic is not a process. Of course, being trained in mathematical structures does not imply that the resulting philosophy is reasonable at all. Take Alain Badiou as an example, who just blows up materialism.
43. A completely new theory of governmentality and sovereignty would be possible here.

44. The notion of “scopic” media as coined by Knorr Cetina means that modern media substantially change the point of view (“scopein”, looking, viewing). Today, we are not just immersed in them; we deliberately choose them and search for them. The change of perspective is thought to be manifold, contracting space and time. This, however, is not quite specific to the new media.

45. Here we refer to our extended view of “information” that goes far beyond the technically reduced perspective that forms the mainstream today. Information is a category that can’t be limited to the immaterial. See the chapter about “Information and Causality”.

46. Vera Bühlmann described certain aspects of Deleuze’s philosophy as an attempt to naturalize transcendentality in the context of emergence, as it occurs in complex systems. Deleuze described the respective setting in “Logic of Sense” [49] as the 14th series of paradoxes.

47. …which is not quite surprising, since we developed the choreostemic space together.

• [1] Gilles Deleuze, Difference and Repetition. Translated by Paul Patton, Athlon Press, 1994 [1968].
• [2] Ludwig Wittgenstein, Philosophical Investigations.
• [3] Wilhelm Vossenkuhl, Die Möglichkeit des Guten. Beck, München 2006.
• [4] Jürgen Habermas, Über Moralität und Sittlichkeit – was macht eine Lebensform »rational«? in: H. Schnädelbach (ed.), Rationalität. Suhrkamp, Frankfurt 1984.
• [5] Alan Turing, The Chemical Basis of Morphogenesis.
• [6] K. Wassermann, That Centre-Point Thing. The Theory Model in Model Theory. in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
• [7] Georg Christoph Tholen, Die Zäsur der Medien. Kulturphilosophische Konturen. Suhrkamp, Frankfurt 2002.
• [8] Vera Bühlmann, Inhabiting Media: Annäherungen an Herkünfte und Topoi medialer Architektonik. Thesis, University of Basel 2011. Available online; summary (in German) here.
• [9] Bruno Latour,
• [10] Karin Knorr Cetina (1991). Epistemic Cultures: Forms of Reason in Science. History of Political Economy, 23(1): 105-122.
• [11] Günther Ropohl, Die Unvermeidlichkeit der technologischen Aufklärung. in: Paul Hoyningen-Huene & Gertrude Hirsch (eds.), Wozu Wissenschaftsphilosophie? De Gruyter, Berlin 1988.
• [12] Bas C. van Fraassen, Scientific Representation: Paradoxes of Perspective. Oxford University Press, New York 2008.
• [13] Ronald N. Giere, Explaining Science: A Cognitive Approach. Cambridge University Press, Cambridge 1988.
• [14] Aaron Ben-Ze’ev, Is There a Problem in Explaining Cognitive Progress? pp.41-56 in: Robert F. Goodman & Walter R. Fisher (eds.), Rethinking Knowledge: Reflections Across the Disciplines (SUNY Series in the Philosophy of the Social Sciences). SUNY Press, New York 1995.
• [15] Robert Brandom, Making it Explicit.
• [16] C.S. Peirce, var.
• [17] Umberto Eco,
• [18] Helmut Pape, var.
• [19] Vera Bühlmann, “Primary Abundance, Urban Philosophy — Information and the Form of Actuality.” pp.114-154 in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
• [20] Hans Reichenbach, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge. University of Chicago Press, Chicago 1938.
• [21] Wesley C. Salmon, Causality and Explanation. Oxford University Press, New York 1998.
• [22] Mario Bunge, Causality and Modern Science. Dover Publ. 2009 [1979].
• [23] Judea Pearl, T.S. Verma (1991). A Theory of Inferred Causation.
• [24] Thomas S. Kuhn, Scientific Revolutions.
• [25] Nancy Cartwright, var.
• [26] Isabelle Stengers, Spekulativer Konstruktivismus. Merve, Berlin 2008.
• [27] Peter M. Stephan Hacker, “Of the ontology of belief.” in: Mark Siebel, Mark Textor (eds.), Semantik und Ontologie. Ontos Verlag, Frankfurt 2004, pp. 185–222.
• [28] Hans Radder, “Technology and Theory in Experimental Science.” in: Hans Radder (ed.), The Philosophy of Scientific Experimentation. University of Pittsburgh Press 2003, pp. 152-173.
• [29] C. Alchourron, P. Gärdenfors, D. Makinson (1985). On the logic of theory change: Partial meet contraction functions and their associated revision functions. Journal of Symbolic Logic, 50: 510–530.
• [30] Sven Ove Hansson (1998). Editorial to Thematic Issue on: “Belief Revision Theory Today”, Journal of Logic, Language, and Information 7(2), 123-126.
• [31] John L. Pollock, Anthony S. Gillies (2000). Belief Revision and Epistemology. Synthese 122: 69–92.
• [32] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38:2, 339-366.
• [33] Robert Axelrod, Die Evolution der Kooperation. Oldenbourg, München 1987.
• [34] Marc Rölli, Kritik der anthropologischen Vernunft. Matthes & Seitz, Berlin 2011.
• [35] Deleuze, Guattari, What is Philosophy?
• [36] Hilary Putnam, Representation and Reality.
• [37] Giorgio Agamben, The State of Exception. University of Chicago Press, Chicago 2005.
• [38] Elena Bellina, “Introduction.” in: Elena Bellina and Paola Bonifazio (eds.), State of Exception. Cultural Responses to the Rhetoric of Fear. Cambridge Scholars Press, Newcastle 2006.
• [39] Friedrich Albert Lange, Geschichte des Materialismus und Kritik seiner Bedeutung in der Gegenwart. Frankfurt 1974. Available online @ zeno.org.
• [40] Michel Foucault, Archaeology of Knowledge.
• [41] Benjamin Morgan, Undoing Legal Violence: Walter Benjamin’s and Giorgio Agamben’s Aesthetics of Pure Means. Journal of Law and Society, Vol. 34, Issue 1, pp. 46-64, March 2007. Available at SSRN: http://ssrn.com/abstract=975374
• [42] Michael Epperson, “Bridging Necessity and Contingency in Quantum Mechanics: The Scientific Rehabilitation of Process Metaphysics.” in: David R. Griffin, Timothy E. Eastman, Michael Epperson (eds.), Whiteheadian Physics: A Scientific and Philosophical Alternative to Conventional Theories. In process, available online.
• [43] Gilles Deleuze, Empiricism and Subjectivity. An Essay on Hume’s Theory of Human Nature. Columbia University Press, New York 1989.
• [44] Gilles Deleuze, Pure Immanence – Essays on A Life. Zone Books, New York 2001.
• [45] Isabelle Peschard
• [46] Karin Knorr Cetina (2009). The Synthetic Situation: Interactionism for a Global World. Symbolic Interaction, 32(1), pp. 61-87.
• [47] Karin Knorr Cetina (2012). Skopische Medien: Am Beispiel der Architektur von Finanzmärkten. in: Andreas Hepp & Friedrich Krotz (eds.), Mediatisierte Welten: Beschreibungsansätze und Forschungsfelder. VS Verlag, Wiesbaden 2012, pp. 167-195.
• [48] Vera Bühlmann, “Serialization, Linearization, Modelling.” First Deleuze Conference, Cardiff 2008; “Gilles Deleuze as a Materialist of Ideality”, lecture held at the Philosophy Visiting Speakers Series, Duquesne University, Pittsburgh 2010.
• [49] Gilles Deleuze, Logic of Sense. Columbia University Press, New York 1991 [1990].
• [50] N. Malcolm, Nothing is Hidden: Wittgenstein’s Criticism of His Early Thought. Basil Blackwell, Oxford 1986.
• [51] David Blair, Wittgenstein, Language and Information: “Back to the Rough Ground!” Springer, New York 2006.
• [52] Caroline Lyon, Chrystopher L. Nehaniv, J. Saunders (2012). Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. Available online (doi:10.1371/journal.pone.0038236).
• [53] Peter M. Stephan Hacker, “Analytic Philosophy: Beyond the linguistic turn and back again.” in: M. Beaney (ed.), The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology. Routledge, London 2006.
• [54] Hilary Putnam, The Meaning of “Meaning”, 1976.

Dealing with a Large World

June 10, 2012

The world as an imaginary totality of all actual and virtual relationships between assumed entities can be described in innumerable ways. Even what we call a “characteristic” forms only in a co-dependent manner, together with the formation processes of entities and relationships. This fact is particularly disturbing if we encounter something for the first time, without the guidance provided by more or less applicable models, traditions, beliefs or quasi-material constraints. Without those means, any selection out of all possible or constructible properties is doomed to be fully contingent, subject to pure randomness. Yet, this does not lead to results that are similarly random. Given the equipment with tools and methods for a task or situation at hand, modeling is for the most part the task of reducing the infiniteness of possible selections in such a way that the resulting representation can be expected to be helpful. Of course, this “utility” is not a hard measure in itself. It is not only dependent on the subjective attitude to risk, mainly the model risk and the prediction risk; utility is also relative to the scale of the scope, in other words, whether one is interested in motor or other purely physical aspects, tactical aspects or strategic aspects, whether one is interested in more local or global aspects, both in time and space, or whether one is interested in any kind of balanced mixture of those aspects. Establishing such a mixture is a modeling task in itself, of course, albeit one that is often accomplished only implicitly. The randomness mentioned above is a direct corollary of the empirical underdetermination1. From a slightly different perspective, we also may say that it is an inevitable consequence of the primacy of interpretation. And we also should not forget that language, and particularly metaphors in language—and any kind of analogical thinking as well—are means to deal constructively with that randomness, turning physical randomness into contingency. Even within the penultimate guidance of predictivity—it is only a soft guidance though—large parts of what we reasonably could conceive as facts (as temporarily fixed arrangements of relations) are mere collaborative construction, an ever undulating play between the individual and the general. Even if analogical thinking indeed is the cornerstone, if not the Acropolis, of human mindedness, it is always preceded by and always rests upon modeling. Only a model allows us to pick some aspect out of the otherwise unsorted impressions taken up from the “world”.
In previous chapters we already discussed quite extensively the various general as well as some technical aspects of modeling, from an abstract as well as from a practical perspective.2 Here we focus on a particular challenge, the selection task regarding the basic descriptors used to set up a particular model. Well, given a particular modeling task, we have the practical challenge of reducing a large set of pre-specific properties into a small set of “assignates” that together represent in some useful way the structure of the dynamics of the system we observed. How to reduce a set of several hundred properties created by observation? The particular challenge arises even in the case of linear systems if we try to avoid subjective “cut-off” points that are buried deeply in the method we use. Such heuristic means are widespread in statistically based methods. The bad thing about that is that you can’t control their influence on the results. Since the task comprises the selection of properties for the description of the entities (prototypes) to be formed, such arbitrary thresholds, often justified or even enforced just by the method itself, will exert a profound influence on the semantic level. In other words, the method corroborates its own assumption of neutrality. Yet, we also never should assume linearity of a system, because most of the interesting real systems are non-linear, even in the case of trivial machines. Brute-force approaches are not possible, because the number of possible models is 2^n, with n the number of properties or variables. Non-linear models can’t be extrapolated from known ones, of course. The Laplacean demon3 became completely wrapped by Thomean folds4, being even quite worried by things like Turing’s formal creativity5. When dealing with observations from “non-linear entities”, we are faced with the necessity to calculate and evaluate any selection of variables explicitly. Assuming a somewhat fantastic figure of 0.0000001 seconds (1e-7 s) needed to calculate a single model, we still would need on the order of 10^15 years to visit all models if we had to deal with just 100 variables. To make it more palpable: it would take almost a million times longer than the age of the earth, which is roughly 4.5 billion years… (see the short arithmetic sketch below). Obviously, we have to drop the idea that we can “prove” the optimality of a particular model. The only thing we can do is to minimize the probability that within a given time T a better model could be found. On the other hand, the data are not of unbounded complexity, since real systems are not either. There are regularities, islands of stability, so to speak. There is always some structure, otherwise the system would not persist as an observable entity. As a consequence, we can organize the optimization of the “failure time probability”; we may even consider this as a second-order optimization. We may briefly note that the actual task thus is not only to select a proper set of variables; we also should identify the relations between the observed and constructed variables. Of course, there are always several if not many sets of variables that we could consider as “proper”, precisely for the reason that they form a network of relations, even if this network is probabilistic in nature and is itself kind of a model. So, how to organize this optimization? Basically, everything has to be organized as nested, recurrent processes. The overall game we could call learning.
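A minimal back-of-the-envelope sketch of this arithmetic in Python; the time per model is the assumption stated above, everything else is plain arithmetic:

n_vars = 100
seconds_per_model = 1e-7            # the "fantastic" figure assumed in the text
n_models = 2 ** n_vars              # every subset of variables is one candidate model
total_seconds = n_models * seconds_per_model
seconds_per_year = 365.25 * 24 * 3600
total_years = total_seconds / seconds_per_year
age_of_earth_years = 4.5e9
print(f"{total_years:.1e} years, about {total_years / age_of_earth_years:.1e} times the age of the earth")
# prints roughly 4.0e+15 years, about 8.9e+05 times the age of the earth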
Yet, it should be clear that every “move” and every fixation of some parameter and its value is nothing else than a hypothesis. There is no “one-shot approach”, and no linear progression either. If we want to avoid naive assumptions—and any assumption that remains untested is de facto a naive assumption—we have to test them. Everything is trial and error, or, expressed in a more educated manner, everything has to be conceived as a hypothesis. Consequently we can reduce the number of variables only by a recurrent mechanism. As a lemma we conclude that any approach that does not reduce the number of variables in a recurrent fashion can’t be conceived as a sound approach.

Contingent Collinearities

It is the structuredness of the observed entity that causes the similarity of any two observations across all available or a priori chosen properties. We also may expect that any two variables could be quite “similar”6 across all available observations. This provides the first two opportunities for reducing the size of the problem. Note that such reduction by “black-listing” applies only to the first steps in a recurrent process. Once we have evidence that certain variables do not contribute to the predictivity of our model, we may loosen the intensity of any of the reductions! Instead of removing a variable from the space of expressibility, we may preferably maintain a weighted preference list in later stages of modeling. So, if we find n observations or variables to be sufficiently collinear, we could remove a portion p(n) from this set, or we could compress them by averaging.

R1: reduction by removing or compressing collinear records.
R2: reduction by removing or compressing collinear variables.

A feasible criterion for assessing the collinearity is the monotonicity of the relationship between two variables, as it is reflected by Spearman’s correlation (see the sketch below). We also could apply K-means clustering using all variables, then averaging all observations that are “sufficiently close” to the center of the clusters. Albeit the respective thresholding is only a preliminary tactical move, we should be aware of the problematics we introduce by such a reduction. Firstly, it is the size of the problem that brings in a notion of irreversibility, even if we are fully aware of the preliminarity. Secondly, R1 is indeed critical because it is in some quite obvious way a petitio principii. Even tiny differences in some variables could be masked by larger differences in such variables as are ultimately recognized as irrelevant. Hence, very tight constraints should be applied when performing R1. When removing collinear records we also have to take care of the outcome indicator. Often, the focused outcome is much less frequent than its “opposite”. Preferably, we should remove records that are marked as negative outcome, up to a ratio of 1:1 between positive and negative outcomes in the reduced data. Such “adaptive” sampling is similar to so-called “biased sampling”.

Directed Collinearities

In addition to those two collinearities there is a third one, which is related to the purpose of the model. Variables that do not contribute to the predictive reconstruction of the outcome we could call “empirically empty”.

R3: reduction by removing empirically empty variables.

Modeling without a purpose can’t be considered to be modeling at all7, so we always have a target variable available that reflects the operationalization of the focused outcome.
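A minimal sketch of the variable-side reduction (R2) based on Spearman’s correlation; the threshold and the function name are illustrative choices, not part of the original procedure:

import numpy as np
from scipy.stats import spearmanr

def reduce_collinear_variables(X, threshold=0.95):
    # R2 sketch: greedily keep a variable only if its rank correlation
    # with every already-kept variable stays below the threshold.
    # X: (n_observations, n_variables) array with more than two variables.
    rho, _ = spearmanr(X)                      # full rank-correlation matrix over columns
    kept = []
    for j in range(X.shape[1]):
        if all(abs(rho[j, k]) < threshold for k in kept):
            kept.append(j)
    return kept                                # indices of the retained variables

# usage: kept = reduce_collinear_variables(X); X_reduced = X[:, kept]

Note that the greedy pass is order-dependent, which fits the caveat above: such black-listing is only a preliminary tactical move, and the dropped variables may later re-enter via a weighted preference list.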
We could argue that only those variables that are collinear with the target variable are interesting for a detailed inspection. Yet, that’s a problematic argument, since we need some kind of model to decide whether to exclude a variable or not, based on some collinearity measure. Essentially, that model claims to predict the predictivity of the final model, which of course is not possible. Any such a priori “determination” of the contribution of a variable to the final predictivity of a model is nothing else than a very preliminary guess. Thus, we indeed should treat it just as a guess, i.e. we should consider it as a propensity weight for selecting the variable. In the first explorative steps, however, we could choose an aggressive threshold, causing the removal of many variables from the vector. R1 removes redundancy across observations. The same effect can be achieved by a technique called “bagging”, or similarly “foresting”. In both cases a comparatively small portion of the observations is taken to build a “small” model, and the “bag” or “forest” of all small models is then taken to build the final, compound model. Bagging as a technique of “split & reduce” can also be applied in the variable domain.

R4: reduction of complexity by splitting.

Once an acceptable model or set of models has been built, we can check the postponed variables one after another. In the case of splitting, the confirmation is implicitly performed by weighting the individual small models.

Compression and Redirection

Elsewhere we already discussed the necessity and the benefits of separating the transformation of data from the association of observations. If we separate them, we can see that all we need is an improvement or a preservation of the potential distinguishability of observations. The associative mechanism need not “see” anything that even comes close to the raw data, as long as the resulting association of observations results in a proper derivation of prototypes.8 This opens the possibility for a compression of the observations, e.g. by the technique of random projection. Random projection maps vector spaces onto each other. If the dimensionality of the resulting vector of reduced size remains large enough (100+), then the separability of the vectors is kept intact. The reason is that in a high-dimensional vector space almost all randomly chosen vectors are nearly “orthogonal” to each other. In other words, random projection does not change the structure of the relations between vectors (see the sketch below).

R5: reduction by compression.

During the first explorative steps one could construct a vector space of d=50, which allows a rather efficient exploration without introducing too much noise. Noise in a normalized vector space essentially means changing the “direction” of the vectors; the effect of changing the length of vectors due to random projection is much less profound. Also note that introducing noise is not a bad thing at all: it helps to avoid overfitting, resulting in more robust models. If we conceive of this compression by means of random projection as a transformation, we could store the matrix of random numbers as the parameters of that transformation. We then could apply it in any subsequent classification task, i.e. when we apply the model to new observations. Yet, the transformation by random projection destroys the semantic link between the observed variables and the predictivity of the model. Each of the columns after such a compression contains information from more than one of the input variables.
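A minimal sketch of R5, assuming a Gaussian projection matrix (one common construction among several); the dimensions and the seed are illustrative:

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)

# Gaussian random projection: with entries drawn from N(0, 1/d_out), the
# mapping approximately preserves pairwise distances (Johnson-Lindenstrauss).
def make_random_projection(d_in, d_out):
    return rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(d_in, d_out))

X = rng.normal(size=(200, 5000))        # toy data: 200 observations, 5000 raw properties
P = make_random_projection(5000, 100)   # store P: it parameterizes the transformation
Z = X @ P                               # compressed observations, 200 x 100

ratios = pdist(Z) / pdist(X)            # pairwise-distance ratios after vs. before
print(ratios.mean(), ratios.std())      # mean close to 1 with small spread:
                                        # distinguishability is essentially preserved

Keeping the matrix P around is exactly what allows the same compression to be applied to new observations in a later classification step, as described above.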
In order to support understanding, we have to reconstruct the semantic link. That’s fortunately not a difficult task, albeit it is only possible if we use an index that allows us to identify the observations even after the transformation. The result of building the model is a collection of groups of records, or indices, respectively. Based on these indices we simply identify those variables which minimize the ratio of the variance within the groups to the variance of the per-variable means across the groups. This provides us with weights for the list of all variables, which can be used to drastically reduce the list of input variables for the final steps of modeling. The whole approach could be described as a sort of redirection procedure. We first neglect the linkage between the semantics of individual variables and prediction in order to reduce the size of the task; then, after having determined the predictivity, we restore the neglected link. This opens the road for an even more radical redirection path. We already mentioned that all we need to preserve through transformation is the distinguishability of the observations, without distorting the vectors too much. This could be accomplished not only by random projection, though. If we interpret large vectors as a coherent “event”, we can represent them by the coefficients of wavelets, built from individual observations. The only requirement is that the observations consist of a sufficiently large number of variables, typically n>500. Compression is particularly useful if the properties, i.e. the observed variables, do not bear much semantic value in themselves, as is the case in image analysis, the analysis of raw sensory data, or even the modeling of textual information. In this small essay we described five ways to reduce large sets of variables, or “assignates” (link) as they are called more appropriately. Since for pragmatic reasons a petitio principii can’t be avoided in attempting such a reduction, mainly due to the inevitable fact that we need a method for it, the reduction should be organized as a process that decreases the uncertainty in assigning a selection probability to the variables. Regardless of the kind of mechanism used to associate observations into groups, thereby forming the prototypes, a separation of transformation and association is mandatory for such a recurrent organization to be possible.

1. Quine [1]

2. see: the abstract model, modeling and category theory, technical aspects of modeling, transforming data;

3. The “Laplacean Demon” refers to Laplace’s belief that if all parts of the universe could be measured, the future development of the universe could be calculated. Thus it is the paradigmatic label for determinism. Today we know that even if we could measure everything in the universe with arbitrary precision (which we could not, of course), we still could not pre-calculate the further development of the universe. The universe does not develop; it performs an open evolution.

4. René Thom [2] was the first to explicate the mathematical theory of folds in parameter space, which was dubbed “catastrophe theory” in order to reflect the subject’s experience of moving around in folded parameter spaces.

5. Alan Turing not only laid the foundations of deterministic machines for performing calculations; he was also the first to derive the formal structure of self-organization [3]. Based on these formal insights we can design the degree of creativity of a system.
The impossibility to know for sure is the first and basic reason for culture.

6. Note that determining similarity also requires a priori decisions about methods and scales that need to be confirmed. In other words, we always have to start with a belief.

7. Modeling without a purpose can’t be considered to be modeling at all. Performing a clusterization by means of some algorithm does not create a model until we use it, e.g. in order to get some impression. Yet, as soon as we indeed take a look following some goal, we imply a purpose. Unfortunately, in this case we would be enslaved by the hidden parameters built into the method. Things like unsupervised modeling, or “just clustering”, always imply hidden targets and implicit optimization criteria, determined by the method itself. Hence, such things can’t be regarded as a reasonable move in data analysis.

8. This sheds an interesting light on the issue of “representation”, which we could not pursue here.

• [1] W.V.O. Quine, Two Dogmas of Empiricism.
• [2] René Thom, Catastrophe Theory.
• [3] Alan Turing (1952). The Chemical Basis of Morphogenesis.
Heat equation

[Animation: in this example, the heat equation in two dimensions predicts that if one area of an otherwise cool metal plate has been heated, say with a torch, over time the temperature of that area will gradually decrease, starting at the edge and moving inward. Meanwhile, the part of the plate just outside that region will be getting warmer. Eventually the entire plate reaches a uniform intermediate temperature. Both height and color are used to show temperature.]

Statement of the equation

[Figure: the behaviour of temperature when the sides of a 1D rod are at fixed temperatures (in this case, 0.8 and 0, with an initial Gaussian distribution). The temperature approaches a linear function, because that is the stable solution of the equation: wherever the temperature has a nonzero second spatial derivative, the time derivative is nonzero as well.]

For a function u(x, y, z, t) of three spatial variables (x, y, z) (see Cartesian coordinates) and the time variable t, the heat equation is

$$\frac{\partial u}{\partial t} = \alpha\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right).$$

More generally, in any coordinate system,

$$\frac{\partial u}{\partial t} = \alpha\,\Delta u,$$

where α is a positive constant, and Δ or ∇² denotes the Laplace operator. In the physical problem of temperature variation, u(x, y, z, t) is the temperature and α is the thermal diffusivity. For the mathematical treatment it is sufficient to consider the case α = 1.

Note that the state equation, given by the first law of thermodynamics (i.e. conservation of energy), can be written in the following form (assuming no mass transfer or radiation):

$$\rho c_p \frac{\partial T}{\partial t} - \nabla\cdot(k\nabla T) = \dot q_V,$$

where $\dot q_V$ is the volumetric heat flux. This form is more general and particularly useful for recognizing which property (e.g. $c_p$ or $k$) influences which term.

The heat equation is of fundamental importance in diverse scientific fields. In mathematics, it is the prototypical parabolic partial differential equation. In probability theory, the heat equation is connected with the study of Brownian motion via the Fokker–Planck equation. In financial mathematics it is used to solve the Black–Scholes partial differential equation. The diffusion equation, a more general version of the heat equation, arises in connection with the study of chemical diffusion and other related processes.

General description

[Figure: solution of a 1D heat partial differential equation. The temperature (u) is initially distributed over a one-dimensional, one-unit-long interval (x = [0,1]) with insulated endpoints. The distribution approaches equilibrium over time.]

Suppose one has a function u that describes the temperature at a given location (x, y, z). This function will change over time as heat spreads throughout space. The heat equation is used to determine the change in the function u over time. The rate of change of u is proportional to the "curvature" of u. Thus, the sharper the corner, the faster it is rounded off. Over time, the tendency is for peaks to be eroded and valleys filled in. If u is linear in space (or has a constant gradient) at a given point, then u has reached steady state and is unchanging at this point (assuming a constant thermal conductivity). The animated image in the original article shows the way heat changes in time along a metal bar.
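To make the "rate of change is proportional to curvature" statement concrete, here is a minimal explicit finite-difference sketch in Python. The grid size, time step, and heated patch are arbitrary illustrative choices, not part of the article:

```python
import numpy as np

# Explicit finite-difference step for u_t = alpha * u_xx on a rod:
# the update is literally "rate of change proportional to curvature".
alpha, L, nx = 1.0, 1.0, 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha           # stability requires alpha*dt/dx^2 <= 1/2
u = np.zeros(nx)
u[40:60] = 1.0                      # a heated patch on an otherwise cool rod

for _ in range(500):
    curvature = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * alpha * curvature
    u[0] = u[-1] = 0.0              # fixed-temperature (Dirichlet) ends

# Peaks erode and valleys fill in, exactly as described above.
```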
One of the interesting properties of the heat equation is the maximum principle, which says that the maximum value of u occurs either earlier in time than the region of concern or on the edge of the region of concern. This is essentially saying that temperature comes either from some source or from earlier in time, because heat permeates but is not created from nothing. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below).

Another interesting property is that even if u has a discontinuity at an initial time t = t0, the temperature becomes smooth as soon as t > t0. For example, if a bar of metal has temperature 0 and another has temperature 100 and they are stuck together end to end, then very quickly the temperature at the point of connection will become 50 and the graph of the temperature will run smoothly from 0 to 50.

The heat equation is used in probability and describes random walks. It is also applied in financial mathematics for this reason. It is also important in Riemannian geometry and thus topology: it was adapted by Richard S. Hamilton when he defined the Ricci flow that was later used by Grigori Perelman to solve the topological Poincaré conjecture.

The physical problem and the equation

Derivation in one dimension

The heat equation is derived from Fourier's law and conservation of energy (Cannon 1984). By Fourier's law, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across the surface,

$$\mathbf q = -k\,\nabla u,$$

where k is the thermal conductivity and u is the temperature. In one dimension, the gradient is an ordinary spatial derivative, and so Fourier's law is

$$q = -k\,\frac{\partial u}{\partial x}.$$

In the absence of work done, a change in internal energy per unit volume in the material, ΔQ, is proportional to the change in temperature, Δu (in this section only, Δ is the ordinary difference operator with respect to time, not the Laplacian with respect to space). That is,

$$\Delta Q = c_p\,\rho\,\Delta u,$$

where cp is the specific heat capacity and ρ is the mass density of the material. Choosing zero energy at absolute zero temperature, this can be rewritten as Q = cp ρ u. The increase in internal energy in a small spatial region of the material over the time period is given by[1]

$$c_p\rho\int_{x-\Delta x}^{x+\Delta x}\big[u(\xi,t+\Delta t)-u(\xi,t-\Delta t)\big]\,d\xi
= c_p\rho\int_{t-\Delta t}^{t+\Delta t}\!\int_{x-\Delta x}^{x+\Delta x}\frac{\partial u}{\partial\tau}\,d\xi\,d\tau,$$

where the fundamental theorem of calculus was used. If no work is done and there are neither heat sources nor sinks, the change in internal energy in the interval [x − Δx, x + Δx] is accounted for entirely by the flux of heat across the boundaries. By Fourier's law, this is

$$k\int_{t-\Delta t}^{t+\Delta t}\left[\frac{\partial u}{\partial x}(x+\Delta x,\tau)-\frac{\partial u}{\partial x}(x-\Delta x,\tau)\right]d\tau
= k\int_{t-\Delta t}^{t+\Delta t}\!\int_{x-\Delta x}^{x+\Delta x}\frac{\partial^2 u}{\partial\xi^2}\,d\xi\,d\tau,$$

again by the fundamental theorem of calculus.[2] By conservation of energy,

$$\int_{t-\Delta t}^{t+\Delta t}\!\int_{x-\Delta x}^{x+\Delta x}\big[c_p\rho\,u_\tau - k\,u_{\xi\xi}\big]\,d\xi\,d\tau = 0.$$

This is true for any rectangle [t − Δt, t + Δt] × [x − Δx, x + Δx]. By the fundamental lemma of the calculus of variations, the integrand must vanish identically:

$$c_p\rho\,u_t - k\,u_{xx} = 0,$$

which can be rewritten as

$$u_t = \frac{k}{c_p\rho}\,u_{xx},$$

which is the heat equation, where the coefficient k/(cpρ) (often denoted α) is called the thermal diffusivity.

An additional term may be introduced into the equation to account for radiative loss of heat, which depends upon the excess temperature u = T − Ts at a given point compared with the surroundings. At low excess temperatures, the radiative loss is approximately μu, giving a one-dimensional heat-transfer equation of the form

$$u_t = \alpha\,u_{xx} - \mu u.$$

At high excess temperatures, however, the Stefan–Boltzmann law gives a net radiative heat loss proportional to $T^4 - T_s^4$, and the above equation is inaccurate. For large excess temperatures, $T^4 - T_s^4 \approx T^4$, giving a high-temperature heat-transfer equation of the form

$$u_t = \alpha\,u_{xx} - m\,T^4,$$

where $m = \epsilon\sigma p/(\rho c_p A)$. Here, σ is Stefan's constant, ε is a characteristic constant of the material, p is the sectional perimeter of the bar and A is its cross-sectional area. However, using T instead of u gives a better approximation in this case.
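The two-bar example above can be checked in closed form: with step initial data, the solution is an error-function profile whose value at the junction is 50 for every t > 0. A small sketch, with the diffusivity and sample times chosen arbitrarily:

```python
import numpy as np
from math import erf, sqrt

# Two semi-infinite bars at 0 and 100 joined at x = 0: the exact
# solution of u_t = k u_xx with step initial data is an error-function
# profile, and the junction temperature is 50 for every t > 0.
k = 1.0

def u(x, t):
    return 50.0 * (1.0 + erf(x / (2.0 * sqrt(k * t))))

for t in (1e-6, 1e-3, 1.0):
    print(t, u(0.0, t))             # always 50: instant smoothing at the joint
```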
Three-dimensional problem

In the special case of heat propagation in an isotropic and homogeneous medium in three-dimensional space, the equation is

$$\frac{\partial u}{\partial t} = \alpha\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}+\frac{\partial^2 u}{\partial z^2}\right),$$

where:

• u = u(x, y, z, t) is temperature as a function of space and time;
• ∂u/∂t is the rate of change of temperature at a point over time;
• uxx, uyy, and uzz are the second spatial derivatives (thermal conductions) of temperature in the x, y, and z directions, respectively;
• α = k/(ρcp) is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity k, the mass density ρ, and the specific heat capacity cp.

The heat equation is a consequence of Fourier's law of conduction (see heat conduction). If the medium is not the whole space, then in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume an exponential bound on the growth of solutions.[3]

Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, reversing the solution to conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods.

The heat equation is the prototypical example of a parabolic partial differential equation. Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of an arbitrary number of dimensions, as

$$\frac{\partial u}{\partial t} = \alpha\,\Delta u,$$

where the Laplace operator, Δ or ∇², the divergence of the gradient, is taken in the spatial variables.

The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It can also be used to model some phenomena arising in finance, like the Black–Scholes or the Ornstein–Uhlenbeck processes. The equation, and various non-linear analogues, has also been used in image analysis.

The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead, like a partial differential equation involving a second-order time derivative. Some models of nonlinear heat conduction (which are also parabolic equations) have solutions with finite heat transmission speed.[4][5]
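As a one-line illustration of the diffusivity formula α = k/(ρcp), the snippet below uses rough textbook values for copper; the numbers are only indicative, not taken from the article:

```python
# Thermal diffusivity alpha = k / (rho * c_p); rough values for copper.
k, rho, c_p = 401.0, 8960.0, 385.0      # W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * c_p)                 # ~1.16e-4 m^2/s
print(alpha)
```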
Internal heat generation

The function u above represents the temperature of a body. Alternatively, it is sometimes convenient to change units and represent u as the heat density of a medium. Since heat density is proportional to temperature in a homogeneous medium, the heat equation is still obeyed in the new units.

Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume (e.g., in watts/litre, W/L) at a rate given by a known function q varying in space and time.[6] Then the heat per unit volume u satisfies the equation

$$\frac{\partial u}{\partial t} = \alpha\,\Delta u + q.$$

For example, a tungsten light bulb filament generates heat, so it would have a positive nonzero value for q when turned on. While the light is turned off, the value of q for the tungsten filament would be zero.

Solving the heat equation using Fourier series

[Figure: idealized physical setting for heat conduction in a rod with homogeneous boundary conditions.]

The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Let us consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is

$$u_t = \alpha\,u_{xx} \qquad (1)$$

where u = u(x, t) is a function of two variables x and t. Here

• x is the space variable, so x ∈ [0, L], where L is the length of the rod;
• t is the time variable, so t ≥ 0.

We assume the initial condition

$$u(x,0) = f(x), \qquad (2)$$

where the function f is given, and the boundary conditions

$$u(0,t) = 0 = u(L,t). \qquad (3)$$

Let us attempt to find a solution of (1) that is not identically zero, satisfies the boundary conditions (3), and has the following property: u is a product in which the dependence of u on x and t is separated, that is,

$$u(x,t) = X(x)\,T(t). \qquad (4)$$

This solution technique is called separation of variables. Substituting u back into equation (1),

$$\frac{T'(t)}{\alpha\,T(t)} = \frac{X''(x)}{X(x)}.$$

Since the right-hand side depends only on x and the left-hand side only on t, both sides are equal to some constant value −λ. Thus

$$T'(t) = -\lambda\alpha\,T(t) \qquad (5)$$

$$X''(x) = -\lambda\,X(x). \qquad (6)$$

We will now show that nontrivial solutions of (6) for values of λ ≤ 0 cannot occur:

1. Suppose that λ < 0. Then there exist real numbers B, C such that $X(x) = Be^{\sqrt{-\lambda}\,x} + Ce^{-\sqrt{-\lambda}\,x}$. From (3) we get X(0) = 0 = X(L), and therefore B = 0 = C, which implies u is identically 0.
2. Suppose that λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C. From equation (3) we conclude in the same manner as in 1 that u is identically 0.
3. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that $T(t) = Ae^{-\lambda\alpha t}$ and $X(x) = B\sin(\sqrt{\lambda}\,x) + C\cos(\sqrt{\lambda}\,x)$. From (3) we get C = 0 and that, for some positive integer n, $\sqrt{\lambda} = n\pi/L$.

This solves the heat equation in the special case that the dependence of u has the special form (4). In general, the sum of solutions to (1) that satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by

$$u(x,t) = \sum_{n=1}^{\infty} D_n\,\sin\!\frac{n\pi x}{L}\,e^{-\frac{n^2\pi^2\alpha t}{L^2}},
\qquad D_n = \frac{2}{L}\int_0^L f(x)\,\sin\!\frac{n\pi x}{L}\,dx.$$

Other closed-form solutions are available.[7]

Generalizing the solution technique

The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator uxx with the zero boundary conditions can be represented in terms of its eigenvectors. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators.

Consider the linear operator Δu = uxx. The infinite sequence of functions

$$e_n(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\frac{n\pi x}{L}$$

for n ≥ 1 are eigenvectors of Δ. Indeed,

$$\Delta e_n = -\frac{n^2\pi^2}{L^2}\,e_n.$$

Moreover, any eigenvector f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form en for some n ≥ 1. The functions en for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means

$$\langle e_n, e_m\rangle = \int_0^L e_n(x)\,e_m(x)\,dx = \delta_{nm}.$$

Finally, the sequence {en}n∈N spans a dense linear subspace of L²((0, L)). This shows that in effect we have diagonalized the operator Δ.
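A short numerical sketch of this series solution; the initial profile, the number of retained modes, and the grid are arbitrary choices:

```python
import numpy as np

# Fourier sine-series solution of u_t = alpha*u_xx on [0, L] with
# u(0,t) = u(L,t) = 0 and u(x,0) = f(x):
#   u = sum_n D_n sin(n pi x / L) exp(-n^2 pi^2 alpha t / L^2).
alpha, L, N = 1.0, 1.0, 50
x = np.linspace(0.0, L, 201)
f = x * (L - x)                           # any initial profile vanishing at the ends

def solution(t):
    u = np.zeros_like(x)
    for n in range(1, N + 1):
        phi = np.sin(n * np.pi * x / L)
        D_n = 2.0 / L * np.trapz(f * phi, x)            # sine-series coefficient
        u += D_n * phi * np.exp(-(n * np.pi / L)**2 * alpha * t)
    return u

u_early, u_late = solution(0.001), solution(0.1)        # the profile decays mode by mode
```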
Heat conduction in non-homogeneous anisotropic media

In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space.

• The time rate of heat flow into a region V is given by a time-dependent quantity qt(V). We assume q has a density Q(x, t), so that

$$q_t(V) = \int_V Q(x,t)\,dx.$$

• Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area dS and with unit normal vector n is

$$\mathbf H(x)\cdot\mathbf n(x)\,dS.$$

Thus the rate of heat flow into V is also given by the surface integral

$$q_t(V) = -\int_{\partial V} \mathbf H(x)\cdot\mathbf n(x)\,dS,$$

where n(x) is the outward-pointing normal vector at x.

• The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient:

$$\mathbf H(x) = -\mathbf A(x)\cdot\nabla u(x),$$

where A(x) is a 3 × 3 real matrix that is symmetric and positive definite.

• By the divergence theorem, the previous surface integral for heat flow into V can be transformed into the volume integral

$$q_t(V) = \int_V \nabla\cdot\big(\mathbf A(x)\cdot\nabla u\big)\,dx.$$

• The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ.

Putting these equations together gives the general equation of heat flow:

$$\frac{\partial u}{\partial t}(x,t) = \kappa(x)\,\nabla\cdot\big(\mathbf A(x)\cdot\nabla u\big)(x,t).$$

Remarks:

• The coefficient κ(x) is the inverse of the specific heat of the substance at x times the density of the substance at x: κ = 1/(cpρ).
• In the case of an isotropic medium, the matrix A is a scalar matrix equal to the thermal conductivity k.
• In the anisotropic case where the coefficient matrix A is not scalar and/or if it depends on x, an explicit formula for the solution of the heat equation can seldom be written down. Though, it is usually possible to consider the associated abstract Cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). This is usually done by one-parameter semigroup theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by

$$\mathcal A u(x) = \nabla\cdot\big(\mathbf A(x)\cdot\nabla u\big)(x)$$

is self-adjoint and dissipative; thus, by the spectral theorem, it generates a one-parameter semigroup.

Fundamental solutions

A fundamental solution, also called a heat kernel, is a solution of the heat equation corresponding to the initial condition of an initial point source of heat at a known position. These can be used to find a general solution of the heat equation over certain domains; see, for instance, (Evans 1998) for an introductory treatment.

In one variable, the Green's function is a solution of the initial value problem

$$u_t = u_{xx}, \qquad u(x,0) = \delta(x),$$

where δ is the Dirac delta function. The solution to this problem is the fundamental solution

$$\Phi(x,t) = \frac{1}{\sqrt{4\pi t}}\,e^{-x^2/4t}.$$

One can obtain the general solution of the one-variable heat equation with initial condition u(x, 0) = g(x) for −∞ < x < ∞ and 0 < t < ∞ by applying a convolution:

$$u(x,t) = \int_{-\infty}^{\infty} \Phi(x-y,t)\,g(y)\,dy.$$

In several spatial variables, the fundamental solution solves the analogous problem. The n-variable fundamental solution is the product of the fundamental solutions in each variable; i.e.,

$$\Phi(\mathbf x,t) = \frac{1}{(4\pi t)^{n/2}}\,e^{-|\mathbf x|^2/4t}.$$

The general solution of the heat equation on Rⁿ is then obtained by a convolution, so that to solve the initial value problem with u(x, 0) = g(x), one has

$$u(\mathbf x,t) = \int_{\mathbb R^n} \Phi(\mathbf x-\mathbf y,t)\,g(\mathbf y)\,d\mathbf y.$$

The general problem on a domain Ω in Rⁿ is

$$u_t = \Delta u \ \text{ in } \Omega\times(0,\infty), \qquad u(x,0) = g(x),$$

with either Dirichlet or Neumann boundary data. A Green's function always exists, but unless the domain Ω can be readily decomposed into one-variable problems (see below), it may not be possible to write it down explicitly. Other methods for obtaining Green's functions include the method of images, separation of variables, and Laplace transforms (Cole, 2011).
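A minimal numerical sketch of the convolution formula above, using a box-shaped initial temperature (the grid and the initial profile are arbitrary choices):

```python
import numpy as np

# General solution u(x,t) = (Phi(.,t) * g)(x) with the fundamental
# solution Phi(x,t) = exp(-x^2/(4t)) / sqrt(4 pi t)   (alpha = 1).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
g = np.where(np.abs(x) < 1.0, 1.0, 0.0)    # box-shaped initial temperature

def Phi(x, t):
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def u(t):
    return np.convolve(g, Phi(x, t), mode="same") * dx   # discrete convolution

print(np.trapz(u(0.5), x))   # total heat is conserved (~2.0, the box area)
```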
Some Green's function solutions in 1D

A variety of elementary Green's function solutions in one dimension are recorded here; many others are available elsewhere.[8] In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation

$$u_t = k\,u_{xx} + f,$$

where f is some given function of x and t.

Homogeneous heat equation

Initial value problem on (−∞,∞):

$$u_t = k\,u_{xx},\quad u(x,0)=g(x); \qquad
u(x,t) = \frac{1}{\sqrt{4\pi kt}}\int_{-\infty}^{\infty} e^{-(x-y)^2/4kt}\,g(y)\,dy.$$

Comment. This solution is the convolution with respect to the variable x of the fundamental solution

$$\Phi(x,t) = \frac{1}{\sqrt{4\pi kt}}\,e^{-x^2/4kt}$$

and the function g(x). (The Green's function number of the fundamental solution is X00.) Therefore, according to the general properties of the convolution with respect to differentiation, u = g ∗ Φ is a solution of the same heat equation, for

$$\big(\partial_t - k\,\partial_x^2\big)(\Phi \ast g) = \big[\big(\partial_t - k\,\partial_x^2\big)\Phi\big] \ast g = 0,$$

so that, by general facts about approximation to the identity, Φ(⋅, t) ∗ g → g as t → 0 in various senses, according to the specific g. For instance, if g is assumed bounded and continuous on R, then Φ(⋅, t) ∗ g converges uniformly to g as t → 0, meaning that u(x, t) is continuous on R × [0, ∞) with u(x, 0) = g(x).

Initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions:

$$u_t = k\,u_{xx},\quad u(x,0)=g(x),\quad u(0,t)=0; \qquad
u(x,t) = \frac{1}{\sqrt{4\pi kt}}\int_0^{\infty}\Big[e^{-(x-y)^2/4kt} - e^{-(x+y)^2/4kt}\Big]\,g(y)\,dy.$$

Comment. This solution is obtained from the preceding formula as applied to the data g(x) suitably extended to R so as to be an odd function, that is, letting g(−x) := −g(x) for all x. Correspondingly, the solution of the initial value problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. The Green's function number of this solution is X10.

Initial value problem on (0,∞) with homogeneous Neumann boundary conditions:

$$u_t = k\,u_{xx},\quad u(x,0)=g(x),\quad u_x(0,t)=0; \qquad
u(x,t) = \frac{1}{\sqrt{4\pi kt}}\int_0^{\infty}\Big[e^{-(x-y)^2/4kt} + e^{-(x+y)^2/4kt}\Big]\,g(y)\,dy.$$

Comment. This solution is obtained from the first solution formula as applied to the data g(x) suitably extended to R so as to be an even function, that is, letting g(−x) := g(x) for all x. Correspondingly, the solution of the initial value problem on R is an even function with respect to the variable x for all values of t > 0, and in particular, being smooth, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. The Green's function number of this solution is X20.

Problem on (0,∞) with homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions:

$$u_t = k\,u_{xx},\quad u(x,0)=0,\quad u(0,t)=h(t); \qquad
u(x,t) = \int_0^t \frac{x}{\sqrt{4\pi k (t-s)^3}}\,e^{-x^2/4k(t-s)}\,h(s)\,ds.$$

Comment. This solution is the convolution with respect to the variable t of

$$\psi(x,t) = \frac{x}{\sqrt{4\pi k t^3}}\,e^{-x^2/4kt}$$

and the function h(t). Since Φ(x, t) is the fundamental solution of u_t = k u_xx, the function ψ(x, t) is also a solution of the same heat equation, and so is u := ψ ∗ h, thanks to general properties of the convolution with respect to differentiation. Moreover,

$$\int_0^\infty \psi(x,t)\,dt = 1,$$

so that, by general facts about approximation to the identity, ψ(x, ⋅) ∗ h → h as x → 0 in various senses, according to the specific h. For instance, if h is assumed continuous on R with support in [0, ∞), then ψ(x, ⋅) ∗ h converges uniformly on compacta to h as x → 0, meaning that u(x, t) is continuous on [0, ∞) × [0, ∞) with u(0, t) = h(t).

Inhomogeneous heat equation

Problem on (−∞,∞) with homogeneous initial conditions:

$$u_t = k\,u_{xx} + f,\quad u(x,0)=0; \qquad
u(x,t) = \int_0^t\!\int_{-\infty}^{\infty}\Phi(x-y,t-s)\,f(y,s)\,dy\,ds.$$

Comment. This solution is the convolution in R², that is, with respect to both variables x and t, of the fundamental solution and the function f(x, t), both meant as defined on the whole of R² and identically 0 for all t < 0. One verifies that

$$\big(\partial_t - k\,\partial_x^2\big)u = f,$$

which, expressed in the language of distributions, becomes

$$\big(\partial_t - k\,\partial_x^2\big)\Phi = \delta,$$

where the distribution δ is the Dirac delta function, that is, the evaluation at 0.
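The odd-extension (method of images) construction used in the Dirichlet formula above can be verified numerically. This sketch checks that the resulting solution vanishes at x = 0; the initial profile and evaluation time are arbitrary:

```python
import numpy as np

# Half-line x > 0 with u(0,t) = 0: extend the data g oddly and reuse the
# whole-line kernel, which yields the image formula
#   u(x,t) = integral [Phi(x-y,t) - Phi(x+y,t)] g(y) dy.
y = np.linspace(0.0, 20.0, 4001)
dy = y[1] - y[0]
g = np.exp(-(y - 3.0)**2)                 # initial heat concentrated near x = 3

def Phi(x, t):
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def u(x, t):
    return np.sum((Phi(x - y, t) - Phi(x + y, t)) * g) * dy

print(u(0.0, 0.7))   # 0 to numerical precision: the Dirichlet condition holds
```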
Problem on (0,∞) with homogeneous Dirichlet boundary conditions and initial conditions:

$$u_t = k\,u_{xx} + f,\quad u(x,0)=0,\quad u(0,t)=0; \qquad
u(x,t) = \int_0^t\!\int_0^{\infty}\big[\Phi(x-y,t-s) - \Phi(x+y,t-s)\big]\,f(y,s)\,dy\,ds.$$

Comment. This solution is obtained from the preceding formula as applied to the data f(x, t), suitably extended to R × [0,∞) so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0.

Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions:

$$u_t = k\,u_{xx} + f,\quad u(x,0)=0,\quad u_x(0,t)=0; \qquad
u(x,t) = \int_0^t\!\int_0^{\infty}\big[\Phi(x-y,t-s) + \Phi(x+y,t-s)\big]\,f(y,s)\,dy\,ds.$$

Comment. This solution is obtained from the first formula as applied to the data f(x, t), suitably extended to R × [0,∞) so as to be an even function of the variable x, that is, letting f(−x, t) := f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an even function with respect to the variable x for all values of t, and in particular, being a smooth function, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0.

Since the heat equation is linear, solutions of other combinations of boundary conditions, inhomogeneous term, and initial conditions can be found by taking an appropriate linear combination of the above Green's function solutions. For example, to solve

$$u_t = k\,u_{xx} + f,\quad u(x,0)=g(x),$$

let u = w + v, where w and v solve the problems

$$w_t = k\,w_{xx} + f,\ \ w(x,0)=0 \qquad\text{and}\qquad v_t = k\,v_{xx},\ \ v(x,0)=g(x).$$

Similarly, to solve

$$u_t = k\,u_{xx} + f,\quad u(x,0)=g(x),\quad u(0,t)=h(t),$$

let u = w + v + r, where w, v, and r solve the problems

$$w_t = k\,w_{xx} + f,\ \ w(x,0)=0,\ \ w(0,t)=0;$$
$$v_t = k\,v_{xx},\ \ v(x,0)=g(x),\ \ v(0,t)=0;$$
$$r_t = k\,r_{xx},\ \ r(x,0)=0,\ \ r(0,t)=h(t).$$

Mean-value property for the heat equation

Solutions of the heat equation satisfy a mean-value property analogous to the mean-value properties of harmonic functions (solutions of Δu = 0), though a bit more complicated. Precisely, if u solves

$$u_t = \Delta u,$$

then

$$u(x,t) = \frac{\lambda}{4}\iint_{E_\lambda(x,t)} u(y,s)\,\frac{|x-y|^2}{(t-s)^2}\,dy\,ds,$$

where Eλ is a "heat ball," that is, a super-level set of the fundamental solution of the heat equation:

$$E_\lambda(x,t) = \big\{(y,s)\ :\ s\le t,\ \Phi(x-y,\,t-s)\ \ge\ \lambda\big\}.$$

Notice that the diameter of Eλ tends to 0 as λ → ∞, so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough. Conversely, any function u satisfying the above mean-value property on an open domain of Rⁿ × R is a solution of the heat equation. This can be shown by an argument similar to the analogous one for harmonic functions.

Steady-state heat equation

The steady-state heat equation is by definition not dependent on time. In other words, it is assumed that conditions exist such that

$$\frac{\partial u}{\partial t} = 0.$$

This condition depends on the time constant and the amount of time passed since boundary conditions were imposed. Thus, the condition is fulfilled in situations in which the time equilibrium constant is fast enough that the more complex time-dependent heat equation can be approximated by the steady-state case. Equivalently, the steady-state condition exists for all cases in which enough time has passed that the thermal field u no longer evolves in time.

In the steady-state case, a spatial thermal gradient may (or may not) exist, but if it does, it does not change in time. This equation therefore describes the end result in all thermal problems in which a source is switched on (for example, an engine started in an automobile) and enough time has passed for all permanent temperature gradients to establish themselves in space, after which these spatial gradients no longer change in time (as again, with an automobile in which the engine has been running long enough). The other (trivial) solution is for all spatial temperature gradients to disappear as well, in which case the temperature becomes uniform in space as well.
The steady-state equation is much simpler and can help in understanding the physics of the materials without focusing on the dynamics of the heat transport process. It is widely used for simple engineering problems, assuming there is equilibrium of the temperature fields and heat transport with time.

Steady-state condition:

$$\frac{\partial u}{\partial t} = 0.$$

The steady-state heat equation for a volume that contains a heat source (the inhomogeneous case) is Poisson's equation:

$$-k\,\nabla^2 u = q,$$

where u is the temperature, k is the thermal conductivity and q the heat-flux density of the source. In electrostatics, this is equivalent to the case where the space under consideration contains an electrical charge.

The steady-state heat equation without a heat source within the volume (the homogeneous case) is the equation in electrostatics for a volume of free space that does not contain a charge. It is described by Laplace's equation:

$$\nabla^2 u = 0.$$

Particle diffusion

Main article: Diffusion equation

One can model particle diffusion by an equation involving either:

• the volumetric concentration of particles, denoted c, or
• the probability density function associated with the position of a single particle, denoted P.

In either case, one uses the heat equation

$$\frac{\partial c}{\partial t} = D\,\Delta c \qquad\text{or}\qquad \frac{\partial P}{\partial t} = D\,\Delta P.$$

Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in meters squared per second. If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation.

Brownian motion

Let the stochastic process X be the solution of the stochastic differential equation

$$dX_t = \sqrt{2}\,dB_t,$$

where B is the Wiener process (standard Brownian motion). Then the probability density function of X is given at any time t by

$$\Phi(x,t) = \frac{1}{\sqrt{4\pi t}}\,e^{-x^2/4t},$$

which is the solution of the initial value problem

$$u_t = u_{xx},\qquad u(x,0) = \delta(x),$$

where δ is the Dirac delta function.

Schrödinger equation for a free particle

Main article: Schrödinger equation

With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way:

$$\frac{\partial\psi}{\partial t} = \frac{i\hbar}{2m}\,\Delta\psi,$$

where i is the imaginary unit, ħ is the reduced Planck constant, and ψ is the wave function of the particle. This equation is formally similar to the particle diffusion equation, which one obtains through the transformation

$$c(\mathbf R,t)\ \to\ \psi(\mathbf R,t),\qquad D\ \to\ \frac{i\hbar}{2m}.$$

Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wave function at any time through an integral on the wave function at t = 0:

$$\psi(\mathbf R,t) = \int \psi(\mathbf R^0,0)\,G(\mathbf R-\mathbf R^0,t)\,d\mathbf R^0.$$

Remark: this analogy between quantum mechanics and diffusion is a purely formal one. Physically, the evolution of the wave function satisfying Schrödinger's equation might have an origin other than diffusion.

Thermal diffusivity in polymers

A direct practical application of the heat equation, in conjunction with Fourier theory, in spherical coordinates, is the prediction of thermal transfer profiles and the measurement of the thermal diffusivity in polymers (Unsworth and Duarte). This dual theoretical-experimental method is applicable to rubber, various other polymeric materials of practical interest, and microfluids. These authors derived an expression for the temperature at the center of a sphere, TC:

$$\frac{T_C - T_S}{T_0 - T_S} = 2\sum_{n=1}^{\infty}(-1)^{n+1}\,e^{-n^2\pi^2\alpha t/L^2},$$

where T0 is the initial temperature of the sphere and TS the temperature at the surface of the sphere, of radius L. This equation has also found applications in protein energy transfer and thermal modeling in biophysics.
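The connection to random walks stated above can be checked directly: the empirical density of many independent walkers matches the fundamental solution. A sketch, with walker count, step size, and bin count chosen arbitrarily:

```python
import numpy as np

# Random walkers solve the heat equation: the empirical density of X_t
# with dX = sqrt(2) dB matches Phi(x,t) = exp(-x^2/(4t)) / sqrt(4 pi t).
rng = np.random.default_rng(1)
n_walkers, n_steps, dt = 100_000, 1000, 1e-3   # total time t = 1

X = np.zeros(n_walkers)
for _ in range(n_steps):
    X += rng.normal(0.0, np.sqrt(2.0 * dt), size=n_walkers)

hist, edges = np.histogram(X, bins=80, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
Phi = np.exp(-centers**2 / 4.0) / np.sqrt(4.0 * np.pi)   # kernel at t = 1
print(np.max(np.abs(hist - Phi)))     # small: the two densities agree
```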
Further applications

The heat equation arises in the modeling of a number of phenomena and is often used in financial mathematics in the modeling of options. The famous Black–Scholes option pricing model's differential equation can be transformed into the heat equation, allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed-form solutions and thus must be solved numerically to obtain a modeled option price. The equation describing pressure diffusion in a porous medium is identical in form with the heat equation. Diffusion problems dealing with Dirichlet, Neumann and Robin boundary conditions have closed-form analytic solutions (Thambynayagam 2011). The heat equation is also widely used in image analysis (Perona & Malik 1990) and in machine learning as the driving theory behind scale-space or graph-Laplacian methods. The heat equation can be efficiently solved numerically using the implicit Crank–Nicolson method (Crank & Nicolson 1947); a sketch of this scheme follows the notes below. This method can be extended to many of the models with no closed-form solution; see for instance (Wilmott, Howison & Dewynne 1995). An abstract form of the heat equation on manifolds provides a major approach to the Atiyah–Singer index theorem, and has led to much further work on heat equations in Riemannian geometry.

Notes

1. ^ Here we are assuming that the material has constant mass density and heat capacity through space as well as time, although generalizations are given below.
2. ^ In higher dimensions, the divergence theorem is used instead.
3. ^ Stojanovic, Srdjan (2003), "Uniqueness for heat PDE with exponential growth at infinity", Computational Financial Mathematics using MATHEMATICA®: Optimal Trading in Stocks and Options, Springer, pp. 112–114, ISBN 9780817641979.
4. ^ The MathWorld page on the porous medium equation and the other related models have solutions with finite wave propagation speed.
5. ^ Juan Luis Vazquez (2006), The Porous Medium Equation: Mathematical Theory, Oxford University Press, ISBN 0-19-856903-3.
6. ^ Note that the units of u must be selected in a manner compatible with those of q. Thus, instead of being units of thermodynamic temperature (kelvin, K), the units of u should be J/L.
7. ^ "EXACT". Exact Analytical Conduction Toolbox. University of Nebraska. January 2013. Retrieved 24 January 2015.
8. ^ The Green's Function Library contains a variety of fundamental solutions to the heat equation.
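Here is the promised sketch of the Crank–Nicolson scheme. The grid, time step, and the dense linear solve are illustrative simplifications of my own; production code would use a tridiagonal (Thomas) solver:

```python
import numpy as np

# Crank-Nicolson scheme for u_t = alpha * u_xx with u = 0 at both ends:
#   (I - r/2 A) u_new = (I + r/2 A) u_old,
# where A is the tridiagonal second-difference matrix.
alpha, L, nx, dt = 1.0, 1.0, 101, 1e-3
dx = L / (nx - 1)
r = alpha * dt / dx**2

A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1))
I = np.eye(nx)
lhs, rhs = I - 0.5 * r * A, I + 0.5 * r * A
for M in (lhs, rhs):                     # pin the Dirichlet boundary rows
    M[0, :], M[-1, :] = 0.0, 0.0
lhs[0, 0] = lhs[-1, -1] = 1.0

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                    # eigenmode: decays as exp(-pi^2 alpha t)
for _ in range(100):                     # advance to t = 0.1
    u = np.linalg.solve(lhs, rhs @ u)

print(u.max(), np.exp(-np.pi**2 * alpha * 0.1))   # should nearly agree
```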
References

• Crank, J.; Nicolson, P. (1947), "A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat-Conduction Type", Proceedings of the Cambridge Philosophical Society, 43: 50–67, doi:10.1017/S0305004100023197.
• Einstein, Albert (1905), "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen", Annalen der Physik, 322 (8): 549–560, doi:10.1002/andp.19053220806.
• Evans, L. C. (1998), Partial Differential Equations, American Mathematical Society, ISBN 0-8218-0772-2.
• Cole, K. D.; Beck, J. V.; Haji-Sheikh, A.; Litkouhi, B. (2011), Heat Conduction Using Green's Functions (2nd ed.), CRC Press, ISBN 978-1-43-981354-6.
• John, Fritz (1991), Partial Differential Equations (4th ed.), Springer, ISBN 978-0-387-90609-6.
• Wilmott, P.; Howison, S.; Dewynne, J. (1995), The Mathematics of Financial Derivatives: A Student Introduction, Cambridge University Press.
• Carslaw, H. S.; Jaeger, J. C. (1959), Conduction of Heat in Solids (2nd ed.), Oxford University Press, ISBN 978-0-19-853368-9.
• Thambynayagam, R. K. M. (2011), The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill Professional, ISBN 978-0-07-175184-1.
• Perona, P.; Malik, J. (1990), "Scale-Space and Edge Detection Using Anisotropic Diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (7): 629–639, doi:10.1109/34.56205.
• Unsworth, J.; Duarte, F. J. (1979), "Heat diffusion in a solid sphere and Fourier theory", American Journal of Physics, 47 (11): 981–983, doi:10.1119/1.11601.
A Single 3N-Dimensional Universe: Splitting vs. Decoherence

A common way of viewing Everettian quantum mechanics is to say that in an act of measurement, the universe splits into two. There is a world in which the electron has x-spin up, the pointer points to "x-spin up," and we believe the electron has x-spin up. There is another world in which the electron has x-spin down, the pointer points to "x-spin down," and we believe the electron has x-spin down. This is why Everettian quantum mechanics is often called "the many worlds interpretation." Because the contrary pointer readings exist in different universes, no one notices that both are read.

This way of interpreting Everettian quantum mechanics raises many metaphysical difficulties. Does the pointer itself split in two? Or are there two numerically distinct pointers? If the whole universe splits into two, doesn't this wildly violate conservation laws? There is now twice as much energy and momentum in the universe as there was just before the measurement. How plausible is it to say that the entire universe splits?

Although this "splitting universes" reading of Everett is popular (Deutsch 1985 speaks this way in describing Everett's view, a reading originally due to Bryce DeWitt), fortunately, a less puzzling interpretation has been developed. The idea is to read Everett's theory as he originally intended: fundamentally, there is no splitting, only the evolution of the wave function according to the Schrödinger dynamics. To make this consistent with experience, it must be the case that there are branches in the quantum state corresponding to what we observe. However, as David Wallace, for example, has argued (2003, 2010), we need not view these branches (indeed, the branching process itself) as fundamental. Rather, these many branches or many worlds are patterns in the one universal quantum state that emerge as the result of its evolution.

Wallace, building on work by Simon Saunders (1993), argues that there is a kind of dynamical process, whose technical name is "decoherence," that can ground the emergence of quasi-classical branches within the quantum state. Decoherence is a process involving an interaction between two systems (one of which may be regarded as a system and the other its environment) in which distinct components of the quantum state come to evolve independently of one another. That this occurs is a result of the wave function's Hamiltonian, the kind of system it is. A wave function that (due to the kind of state it started out in and the Schrödinger dynamics) exhibits decoherence will enter into states capable of representation as a sum of noninteracting terms in a particular basis (e.g., a position basis). When this happens, the system's dynamics will appear classical from the perspective of the individual branches. Note that the facts about the quantum state decohering are not built into the fundamental laws. Rather, this is an accidental fact depending on the kind of state our universe started out in. The existence of these quasi-classical states is not a fundamental fact either, but something that emerges from the complex behavior of the fundamental state.

The sense in which there are many worlds in this way of understanding Everettian quantum mechanics is therefore not the same as it is on the more naive approach already described. Fundamentally there is just one universe evolving according to the Schrödinger equation (or whatever is its relativistically appropriate analog).
However, because of the special way this one world evolves, and in particular because parts of this world do not interfere with each other and can each on their own ground the existence of quasi-classical macro-objects that look like individual universes, it is correct in this sense to say (nonfundamentally) that there are many worlds.

As metaphysicians, we are interested in the question of what the world is fundamentally like according to quantum mechanics. Some have argued that the answer these accounts give us (setting aside Bohmian mechanics for the moment) is that fundamentally all one needs to believe in is the wave function. What is the wave function? It is something that, as we have already stated, may be described as a field on configuration space: a space where each point can be taken to correspond to a configuration of particles, a space that has 3N dimensions, where N is the number of particles. So, fundamentally, according to these versions of quantum mechanics (orthodox quantum mechanics, Everettian quantum mechanics, spontaneous collapse theories), all there is is a wave function, a field in a high-dimensional configuration space. The view that the wave function is a fundamental object and a real, physical field on configuration space is today referred to as "wave function realism." The view that such a wave function is everything there is fundamentally is wave function monism.

To understand wave function monism, it will be helpful to see how it represents the space on which the wave function is spread. We call this space "configuration space," as is the norm. However, note that on the view just described, this is not an apt name, because what is supposed to be fundamental on this view is the wave function, not particles. So, although the points in this space might correspond in a sense to particle configurations, what this space is fundamentally is not a space of particle configurations. Likewise, although we've represented the number of dimensions configuration space has as depending on the number N of particles in a system, this space's dimensionality should not really be construed as dependent on the number of particles in a system.

Nevertheless, the wave function monist need not be an eliminativist about particles. As we have seen, for example, in the Everettian approach, wave function monists can allow that there are particles: derivative entities that emerge out of the decoherent behavior of the wave function over time. Wave function monists favoring other solutions to the measurement problem can also allow that there are particles in this derivative sense. But the reason the configuration space on which the wave function is spread has the number of dimensions it does is not, in the final analysis, that there are particles. This is rather a brute fact about the wave function, and this in turn is what grounds the number of particles there are.

The Wave Function: Essays on the Metaphysics of Quantum Mechanics. Edited by Alyssa Ney and David Z Albert (pp. 33–34, 36–37).
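A toy illustration of the decoherence mechanism discussed in this post is the standard pure-dephasing model of a single qubit, in which off-diagonal density-matrix elements decay while the populations stay fixed, so the two "branches" stop interfering. This sketch is my own illustration, not the authors'; the decay time T2 and the initial state are arbitrary choices:

```python
import numpy as np

# Pure dephasing: off-diagonal coherences decay as exp(-t/T2),
# populations are untouched; the late-time state is a classical mixture.
T2 = 1.0
rho0 = 0.5 * np.array([[1.0, 1.0],
                       [1.0, 1.0]])      # equal superposition |+><+|

def rho(t):
    decay = np.exp(-t / T2)
    return np.array([[rho0[0, 0], rho0[0, 1] * decay],
                     [rho0[1, 0] * decay, rho0[1, 1]]])

for t in (0.0, 1.0, 10.0):
    print(t)
    print(rho(t))    # coherences vanish; the diagonal branches remain
```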
Is the Schrödinger equation derived or postulated?

I'm an undergraduate mathematics student trying to understand some quantum mechanics, but I'm having a hard time understanding what the status of the Schrödinger equation is. In some places I've read that it's just a postulate; at least, that's how I interpret, e.g., the Wikipedia entry on the Schrödinger equation. However, some places seem to derive the Schrödinger equation: just search for "derivation of Schrödinger equation" in Google. This motivates the question in the title: Is the Schrödinger equation derived or postulated? If it is derived, then just how is it derived, and from what principles? If it is postulated, then it surely came from somewhere. Something like "in these special cases it can be derived, and then we postulate it works in general." Or maybe not? Thanks in advance, and please bear with my physical ignorance.

2 Answers

The issue is that the assumptions are fluid, so there aren't axioms that are agreed upon. Of course Schrödinger didn't just wake up with the Schrödinger equation in his head; he had a reasoning, but the assumptions in that reasoning were the old quantum theory and the de Broglie relation, along with the Hamiltonian idea that mechanics is the limit of wave motion. These ideas are now best thought of as derived from postulating quantum mechanics underneath and taking the classical limit with leading semi-classical corrections. So while it is historically correct that the semi-classical knowledge essentially uniquely determined the Schrödinger equation, it is not strictly logically correct, since the thing that is derived is more fundamental than the things used to derive it. This is a common thing in physics: you use approximate laws to arrive at new laws that are more fundamental. It is also the reason that one must have a sketch of the historical development in mind to arrive at the most fundamental theory; otherwise you will have no clue how the fundamental theory was arrived at or why it is true.

The Schrödinger equation is postulated. Any source that claims to "derive" it is actually motivating it. The best discussion of this that I'm aware of is in Shankar, Chapter 4 ("The Postulates -- a General Discussion"). Shankar presents a table of four postulates of Quantum Mechanics, each given as a parallel to a classical postulate from Hamiltonian dynamics. Postulate II says that the dynamical variables x and p of Hamiltonian dynamics are replaced by Hermitian operators $\hat X$ and $\hat P$. In the X-basis, these have the action $\hat X\psi = x\,\psi(x)$ and $\hat P\psi = -i\hbar\frac{d\psi}{dx}$. Any composite variable in Hamiltonian dynamics can be built out of x and p as $\omega(x,p)$; this is replaced by a Hermitian operator $\hat \Omega(\hat X,\hat P)$ with the exact same functional form. Postulate IV says that Hamilton's equations are replaced by the Schrödinger equation. The classical Hamiltonian retains its functional form, with x replaced by $\hat X$ and p replaced by $\hat P$.

NB: Shankar doesn't discuss this, but Dirac does. The particular form of $\hat X$ and $\hat P$ can be derived from their commutation relation. In classical dynamics, x and p have the Poisson bracket {x, p} = 1. In Quantum Mechanics, you can replace this with the commutation relation $[\hat X, \hat P] = i\hbar$. What Shankar calls Postulate II can be derived from this. So you could use that as your fundamental postulate if you prefer.
Summary: the Schrödinger equation didn't just come from nowhere historically. It's a relatively obvious thing to try. Mathematically, there isn't anything more fundamental in the theory that you could use to derive it.
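The commutation relation $[\hat X, \hat P] = i\hbar$ mentioned at the end of the answer can be checked numerically on a grid. This sketch is my own illustration; units with ħ = 1, the Gaussian test function, and the finite-difference derivative are all assumptions:

```python
import numpy as np

# Finite-difference check of [X, P] psi = i*hbar*psi for a smooth test
# function; hbar = 1, and the grid and psi are arbitrary choices.
hbar = 1.0
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2)

def P(f):
    return -1j * hbar * np.gradient(f, dx)    # momentum operator, X-basis

commutator = x * P(psi) - P(x * psi)          # (XP - PX) psi
print(np.allclose(commutator[100:-100],       # ignore grid edges
                  1j * hbar * psi[100:-100], atol=1e-4))   # True
```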
The uncertainty principle (UP) comes up in engineering and physics, but it is a mathematical idea. An old text describes it as "reciprocal spreading." If $f$ is a well-behaved function, the UP might be expressed as $W(f)W(\hat{f}) \geq k$, where $k$ is some constant. If $g$ is a Gaussian, we get equality, i.e., $W(g)W(\hat{g}) = k$.

My question is this. At least in Fourier analysis, the Gaussian is sort of a minimum in the above sense. Are there any real-world problems for which this is a solution? Even in EE I don't think "optimality" of the Gaussian with respect to the UP is ever used. Thanks for any thoughts.

A concise statement of the UP, with equality in the case of a Gaussian as an exercise, is in Nievergelt, Wavelets Made Easy, p. 236, in case my notation obscures the question. – daniel Oct 24 '11 at 17:02

Hm, maybe I'm missing something, but en.wikipedia.org/wiki/Fourier_transform#Uncertainty_principle looks like it says the UP is something else. – user12014 Oct 25 '11 at 3:40

Probably the simplest expression of it is given by Linus Pauling in General Chemistry, p. 83: dt·dv ≥ k. I took liberties with the formulation of the idea, and it is context-dependent. The expressions used in Nievergelt involve weighted functions of f and its FT. There is a survey in J. Fourier Analysis, Nov 3, 1997. – daniel Oct 25 '11 at 9:38

The Wiki article accords with Nievergelt. In words, the more diffuse a function is in the time domain, the more focused it is in the frequency domain, and vice versa. Perhaps I should have used W(Ff). Hope this clarifies. – daniel Oct 25 '11 at 9:50

I'm not sure I understand the question, but is en.wikipedia.org/wiki/Gabor_transform relevant? – endolith Oct 27 '11 at 0:28

1 Answer (accepted)

Assuming you are looking for a "real-world" application of the minimum-uncertainty property of the Gaussian, I might have one answer: in quantum mechanics, Gaussians are used to create minimum-uncertainty wavefunctions which are solutions of the Schrödinger equation. The minimum-uncertainty solutions are useful in constructing what are known as coherent states.

I do not pretend to completely understand the physics, but clearly you are right: "minimal uncertainty" (in the sense of my question) is associated with the coherent states described in the article, so this is right on point. Thanks! – daniel Jun 22 '12 at 23:13
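The equality case for the Gaussian can be verified numerically. In the convention below, W is taken to be the standard deviation of $|f|^2$ (and of $|\hat f|^2$ in angular frequency), for which the bound is $\sigma_x\sigma_k \ge 1/2$; this choice of normalization is my own, not the asker's:

```python
import numpy as np

# Numerical check that a Gaussian saturates sigma_x * sigma_k = 1/2,
# where the widths are standard deviations of |f|^2 and |f_hat|^2.
x = np.linspace(-20.0, 20.0, 2**14)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2.0)

def width(grid, density):
    density = density / np.trapz(density, grid)       # normalize to a pdf
    mean = np.trapz(grid * density, grid)
    return np.sqrt(np.trapz((grid - mean)**2 * density, grid))

k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
f_hat = np.fft.fftshift(np.fft.fft(f))               # magnitude is shift-proof

print(width(x, np.abs(f)**2) * width(k, np.abs(f_hat)**2))   # ~0.5
```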
Will the real god God please stand up? There are many reasons why I’m not retired, but one of the bigger ones is that I haven’t figured out yet how to get at least a quarter (if not a dollar bill) from every person who’s ever asked me how I can believe in “a god or gods” in an age of “science” and “reason”. The question is usually sincere rather than an attempt to troll, but either way, the wording alone is enough to reveal where things are headed, and the ensuing discussions have been nothing if not utterly predictable. In virtually every case the underlying narrative was based on a handful of fashionable just-so stories, none of which appeared to have ever been questioned. Back in days of yore, I was told, bucolic ancients looked out on a universe resplendent with mysteries they could neither understand nor predict, yet depended on for their survival. For all its dependable seasons and regularities, the universe visited floods, fires, and other tragedies on them as often as it yielded its bounty. In their attempt to understand why and find a just order to it all, they attributed these mysteries to the capricious activities of spirits called “gods” who were like us in every respect, except that they were disembodied and endowed with vast magical powers over various parts of the natural order. As the rise of science rolled back these mysteries with rational explanations, such gods were no longer needed to account for them. Eventually, the faiths based on them were rendered superfluous, and thus did Science triumph over religion (note the capital “S” and lower-case “r”). There are so many things wrong with this it’s difficult to know where to begin. Perhaps the best way to unpack this mess is to start with the origins of the God of Classical Theism on which the Abrahamic religions are founded. These cover the professed religious beliefs of well over half of humanity and roughly 80% of North America and account for virtually every instance of the above narrative I’ve ever personally witnessed.1 Contrary to widespread belief, Classical Theism as a formal system of thought didn’t originate with Christianity or Judaism, nor was it an attempt to explain any mystery of the natural world (which makes it quite telling that the God that eventually emerged from that tradition bore a striking similarity to the uniquely monotheistic God of the Old Testament that the Israelites had been worshipping via revelation for nearly a millennium). The seminal theological question never was “is there a god?”—it is, and always has been, “why is there something rather than nothing?” In the Fifth Century BC, the Greek philosopher Parmenides formulated an axiom that was later Latinized as ex nihilo nihil fit (“out of nothing comes nothing”). Unless you believe in magic this is as straightforward as axioms get, and for nearly 2500 years no thinker of any repute has seriously challenged it. [At least not until the present day, when a handful of metaphysically illiterate atheist physicists decided that philosophy is “dead” because it hasn’t kept up with their profession, and gave themselves permission to redefine the word “nothing” and make Magic a sub-discipline of physics. But that’s a topic for another day.] This, in turn, raised other issues. Parmenides went on to argue that change and differentiation must be illusory, for to change, he said, is for something to cease to exist in one state and begin to exist in another. 
Because that would require things to come from nothing, and disappear back into it, he considered it absurd. And yet, change is every bit as indisputable a fact of life as existence itself. What are we to make of these two realities, and how they relate to each other? For the next one or two centuries, philosophers of different schools argued these questions, some emphasizing the primacy of change, and others the primacy of the unchanging unity of things. The first true leap forward came circa the mid-Fourth Century BC when Aristotle published his Metaphysics. Aristotle argued that the apparent tension between being and becoming can be accounted for if we differentiate between the actual state of existence of real-world things (or substances) and their innate potentialities for existing in different ones (later Scholastic thinkers denoted these respectively as acts and potencies). Change occurs when the active potencies of one substance causally instantiate outcomes from the passive potencies of another via four types of causality: their material constituents (material causality), their essential form and identifying properties (formal causality), their direct physical interactions (efficient causality), and their directedness toward ends (final causality). For instance, we could say that the motion of massive objects reflects their mass and other properties (material and formal causes), and the forces they interact with (efficient causes). Aristotle would also say that they fall to the ground when dropped because the earth is their natural resting place (final causality). Similar ideas were developed by Plato, and by the Stoics and Neoplatonists after him, and eventually brought to fruition by medieval Scholastic philosophers and theologians of the Christian, Jewish, and Islamic traditions. Various schools of thought were represented in each, but most, if not all, eventually converged on some combination of the following axioms:

1)   The universe is contingent. Its essential nature, or form (and that of everything in it), is separate from its existence. [e.g. - We can meaningfully conceptualize horses and unicorns without regard to whether there are any.]

2)   The universe is causally interconnected. The acts and potencies of its physical constituents are interrelated in rationally consistent ways.

3)   The universe evolves. Per 2), its actual state of existence changes from moment to moment in dependable ways. [e.g. - Seeds grow into trees, objects fall toward a gravitational source, etc.] As such, science is a meaningful endeavor that gives us real, grounded knowledge about the way the world is.

4)   Potencies may be active powers or passive capacities for change, and the events that unfold from their activity may be (formal terms again) essentially ordered or accidentally ordered (dependent on, or independent of, the continuing activity of their cause/s). [e.g. - You have the active power to father children, and your kids will continue to exist whether you continue that fathering behavior or not (accidentally ordered events). A guitar has the passive power to make music by actualizing the passive power of air to produce sound, but only if it is played by a musician, and the music will exist only while the guitar is being played (essentially ordered events).]

5)   Purely passive potentialities cannot self-realize—they must be instantiated (made actual) by something else that is actual. [e.g. - Wood has the passive potentiality to burn, but only if it's exposed to an actual source of heat.
An infinitely long chain of stationary railroad cars (or one connected in a loop) cannot move, even though each car is connected to one that can pull it. There must be at least one engine with the active potency for inducing motion.]

6)   The universe's actualities and potentialities are a mix of active powers and passive possibilities. [e.g. - A locomotive has the active power to pull a train of cars with passive potentials for motion, but also has other passive dependencies, such as the need for an engineer; you have the active power to walk or run, but not to continue living without food and water; etc.]

7)   As persons with active and passive potencies of our own, we are rational, freely choosing, intentional agents. As such, our observations and thoughts can, and do, give us reliable knowledge of the universe.

From these (particularly the concept of essentially-ordered causality), they concluded that there must exist something that is pure act—the ground of all being and empowered possibility, with no passive potentialities or dependencies (Davies, 2004; Feser, 2010; 2014). Furthermore, this pure act must be:

a)   Eternal - Not within, or in any way constrained by, time or space.
b)   Unchanging - Not evolving per any passive potencies susceptible to influences external to itself.
c)   Simple - A substantial, or essential, unity without parts or differing properties of the sort possessed by physical things.
d)   Omnipotent - Unlimited in active powers.
e)   Omniscient - Present in, and aware of, all that is.
f)   Possessing both intellect and will, and as such, the ground of all personhood (as opposed to being "a" person).
g)   The intentional cause of everything else that is, and thus, the objective source of the meaning, value, and purpose of things.

Aristotle referred to this pure act as the Unmoved Mover. Christian, Jewish, and Islamic philosophers recognized Him as the God of Classical Theism who appears in the Bible and Quran. How these conclusions were reached, and how this timeless, changeless God is related to the Christian Trinity and His portrayal in the pages of both Scriptures, would fill numerous posts and is beyond our scope today. But before we proceed, a few comments are in order.

First, it's widely believed that Aristotle's metaphysics is dependent on his outdated physics, and therefore no longer relevant today. In his 2014 debate with William Lane Craig, atheist physicist Sean Carroll spoke for many when he addressed transcendent causality and the universe (Carroll & Craig, 2014), stating that "[t]he way physics is known to work these days is in terms of patterns, unbreakable rules, laws of nature... There is no need for any extra metaphysical baggage, like transcendent causes, on top of that. It's precisely the wrong way to think about how the fundamental reality works."

All of this is either false or grossly misleading. In modern analytic philosophy, Aristotelian/Scholastic concepts of ontology and causality are every bit as active a field of study as they've ever been (e.g. - Martin, 1997; Davies, 2004; Feser, 2014; 2015; Oderberg, 2008, etc., and sources cited therein). There are, of course, differing schools of thought on them, and their relationship to the sciences is actively debated. Some lean toward a deep interrelationship between physics and these metaphysical ideas. Others, such as Edward Feser (2010; 2014; 2015), argue that the two are entirely separate realms. Aron and I fall somewhere in the middle.
[For more, see Aron's entire series of posts on Fundamental Reality.] While it is true that modern physics treats causality differently than Aristotle and the Scholastics did (e.g. - the notions of material and formal causes are largely redundant in physics and not really needed), clearly the two realms of thought speak to the same underlying realities and even share some common language. The very "patterns, unbreakable rules, laws of nature" Carroll speaks of inherently imply an underlying unity which not only makes physics possible but fits the terms act and potency beautifully. Potentials, for instance, are a regularly recurring theme in physics, and the fact that equations of motion can be derived from them also bears a striking similarity to the Aristotelian notion of final causality. The dynamics of a falling mass can be differentially specified in terms of a static gravitational potential, but a Scholastic would say that the mass falls to earth because that's its natural resting place. The ideas being expressed here aren't as different as many suppose. Another common misconception is that final causality involves teleology. In fact, it's about directedness as much as purpose or design, if not more, and applies to inanimate objects as well as living things. It's not a huge leap to see directedness in the way static potentials lead to equations of motion. These Aristotelian concepts are less rigorously developed, of course, but conceptually at least, they substantially overlap their counterparts in physics, which implies at least some unity between the two. But at the same time, as we saw in my last post, the fact that there are numerous ontic interpretations of QM alone should give us pause before assuming that one of these realms is entirely supervenient on the other. In any event, wherever one falls on this spectrum, the one thing that isn't true is that "our metaphysics must follow our physics." Nor is that "what the word metaphysics means," as Carroll claims. Aristotle's Metaphysics was so named because he wrote that book after he wrote his Physics, not because the former is any less foundational than the latter, or entirely supervenient on it (in Greek, the root meta is equivalent to the Latin post, meaning "after"). Second, it's worth noting that this argument, which is known as the cosmological argument, is widely misunderstood. In popular writings, particularly those of its critics, it's almost always presented as an argument for a historical creation event based on accidentally-ordered temporal chains of causality, when in fact it's based entirely on essentially-ordered, or simultaneous, causality.2 The traditional example given by St. Thomas Aquinas and other Scholastics is that of someone pushing a ball with a stick. The passive potency of the ball for rolling motion is realized only while it is being pushed by the stick's passive potency for doing so, which in turn is realized only while the one wielding it is exercising his/her active potency for wielding it to push objects. The entire causal chain is simultaneous in the present moment and has nothing whatsoever to do with any cause or causes that may have existed even a few seconds prior. In fact, Aquinas, who developed the argument better than anyone else in history, famously believed that it wasn't possible to demonstrate that the universe had a temporally-ordered causal beginning. He believed it did because Scripture said so, but he felt that observation and philosophical arguments alone couldn't demonstrate that.
Today, of course, Carroll's dismissal of transcendent causes notwithstanding, the evidence for a beginning is considerable and, whether they admit it or not, a source of dismay for Atheists. Aquinas' claims to the contrary are relevant here only to the extent that they emphasize that time-ordered causality plays no role in traditional cosmological arguments.

Furthermore, in the writings of Aristotle and the Scholastics, the term move denotes change in general, not just change of location as we understand it. To them, changes in any property—including, say, color, temperature, or even a beginning of existence—would be considered "movement." Interestingly, Carroll misses this subtlety as well. In his book The Big Picture (2017) he claims that modern physics renders Aristotle's unmoved mover meaningless because per special relativity, inertial reference frames do not distinguish between stationary objects and those moving at constant velocities. [It's odd that Carroll misunderstands so many of these concepts as completely, and chronically, as he does. Unlike many scientists these days, he has a background in philosophy (having minored in it as an undergraduate) and is known for his thoughtfulness and attention to detail with metaphysical topics. He's repeatedly, and rightly, called out many of his colleagues for their Philistine recklessness in these areas and with philosophy in general. If anyone should know better, it would be him.]

Finally, it should also be noted that the history of thought on God's nature isn't quite as monolithic as I perhaps made it sound. In recent years, for instance, some theologians and philosophers of religion have questioned the notion of God as the ground of personhood (as opposed to a being with a personality), His simplicity, and the claim that He's timeless and unchanging. God, it's argued, cannot be meaningfully omniscient and loving, as He's presented in the Bible and Quran, unless He has attributes that manifest in a personality not unlike ours, and He in some sense experiences time (although opinions vary as to whether His time maps onto the spacetime of our experience, and if so, how). This school of thought, referred to by some as theistic personalism, has been particularly popular among advocates of presentism (the so-called "A-Theory" of time). Its more notable advocates include Richard Swinburne, Alvin Plantinga, J.P. Moreland, and William Lane Craig.

Theistic personalism is a relatively late development in the history of Classical Theism and hasn't gained widespread acceptance among theologians and philosophers of religion (Davies, 2004). The traditional arguments for the simplicity and timelessness of the God of Classical Theism as presented above are formidable and well-supported not only by metaphysics but by the Abrahamic Scriptures as well. The apparent difficulties presented by a timeless God acting in changing history are not as intractable as they may seem at first blush either. If God is omnipresent throughout His created space-time, interacting with it at every point according to His Will, He will appear to change from the standpoint of time-bound creatures like us, much the way a static landscape appears to change to the passengers of a car driving through it. Dispensing with all this simply to bring God more in line with our experience adds layers of arbitrary and unnecessary metaphysical complexity that cry out for Occam's Razor. As if that weren't enough, it runs badly afoul of physics as well.
The presentism that it most naturally fits has numerous issues, not the least of which are the difficulties of reconciling it with Lorentz boosts and the relativity of simultaneity. While it is possible to make presentism work in a relativistic framework (Copan & Craig, 2004), the match ain't exactly made in Heaven and, IMHO at least, creates far more problems than it solves. Nevertheless, theistic personalism does have its place in modern theological discourse, and it has been ably defended by its proponents (Moreland & Craig, 2003).

There… Now that all the fine print is out of the way, let's return to our seven-axiom argument for the existence of God. At this point, several things should be readily apparent.

1)   God is not "a god"

When Atheists (or more commonly, New Atheists) speak of "a god or gods," what they invariably have in mind are demigods—minor deities of the sort one finds in ancient mythologies. These are the disembodied space- and time-bound magical spirits central to their narrative. In The God Delusion, Richard Dawkins (2008) runs through a representative list of them. The problem with this is obvious—the "gods" he names bear no resemblance whatsoever to the God of Classical Theism. In Greek mythology, Zeus had a family tree like us. He was the child of the Titans Cronus and Rhea, and they were, in turn, descended from the primordial Greek deities (Wikipedia, 2016). Like the rest of the Greek pantheon, not only was he a time-bound spirit, he was earth-bound as well and "lived" at a physical location (Mt. Olympus). In fact, as often as not, such demigods were deified human rulers. Case in point: the Sumerian and Akkadian ruler/gods Gilgamesh and Naram-Sin, who ruled during the Third Millennium BC (Armstrong, 2015).

God, on the other hand (note the capital "G"), is the ground of all being and personhood. He is neither space- and time-bound nor an instantiation—there is no general class of things called "grounds of all being" of which He can be said to be one example among many. The very claim that there could be more than one such ground is inherently self-contradictory. It's no accident that the Abrahamic religions are all monotheistic. And as the creator of all else that exists—including the very space-time manifold whose geometry is, per general relativity, related to the mass-energy and momentum it contains—calling Him a demigod amounts to claiming that He's bound by His own creation, and dependent on it for His existence. That, my friends, is patently absurd. Saying that God is "a god" isn't merely wrong, it's a category error.

Interestingly, the distinction we find today between the anthropomorphic personified God of televangelists' sermons and children's picture Bibles, and the God of Classical Theism, was every bit as real in Aristotle's day as well. Then, as now, philosophers distinguished between Everyman's bearded, gray-haired Zeus who threw thunderbolts from Mt. Olympus, and the classical theistic "Zeus" (or more properly, Greek primordial God) of formal thought. If this were the 4th Century BC, New Atheists like Dawkins would be out in front of the Athens Peripatetic school in togas beating their well-inflated chests about "a zeus or zeuses," and Aristotle would be the one biting his tongue and doing whatever could be done to educate them. Some things never change… ;-)

2)   God is not a hypothesis

Science doesn't deal in "facts" (at least not as most people understand that word). More correctly, it deals with data. One begins with reproducible measurements of some observed phenomena (e.g.
– the power density spectrum of the cosmic microwave background, or tracks emerging from particle collisions in a cloud chamber). One or more hypotheses are formed to account for them, and the most viable of these are developed into formal theories from which the outcomes of further, as-yet untested observations can be predicted. In the case of physics, this generally means a set of differential equations and boundary conditions, a Lie algebra that embodies an expected symmetry, or the like. The failure of a theory's predictions—its null hypothesis—counts as evidence against it. If further experiments yield the predicted outcomes, confidence in the theory grows, and if not, suspicion does. In this sense, hypotheses that make no testable predictions cannot meaningfully be called scientific.3

Enter our axioms 1) through 7). Though all are based on observation, and scientific illustrations could be given for them, they cannot be called "data" in any scientifically meaningful sense. How does one create a "dataset" to quantify concepts like act and potency, and use it to validate a ground of all being and personhood and the contingency of the universe? What they are is a set of metaphysical axioms about the underlying ontic nature of the universe, and God (again, note the capital "G") isn't a hypothesis we postulate to account for them—He's a formally reasoned conclusion derived from them.

Alright, before anyone blows a gasket, let me be clear about what I mean. No, I am not saying that the existence of God can be logically/mathematically proven. If it were that easy, Atheism wouldn't be a worldview worth discussing, and its proponents wouldn't include some of the finest minds in history. What I am saying is that it's a different sort of argument than the traditional data → hypothesis → test methodology science relies on. To claim that there's no evidence for "a god or gods" is like claiming that there's no "evidence" for "an equation or equations" called the Mean Value Theorem of Calculus. The Mean Value Theorem isn't a hypothesis—it's a formal proof that begins with certain axioms (e.g. – a function f that is continuous on a closed interval [a,b] and differentiable on its interior, from which it follows that f'(c) = \frac{f(b)-f(a)}{b-a} for some c between a and b). The extent to which one accepts those axioms is the extent to which one accepts the conclusion. Likewise, to reject that conclusion is to reject the axioms it begins with. Which brings us to the next point…

3)   Atheism is not a null hypothesis

Finally, we arrive at New Atheism's most beloved get-out-of-jail-free card—the belief that it's merely the rejection of Theism, and as such, a null hypothesis that needs no defense. Sam Harris (2008) minces no words on this point, and a New Atheist friend and colleague once put it to me even more starkly on social media. Clever, aren't we? Don't state your claims directly, frame them as a rejection of someone else's… then conveniently excuse yourself from any responsibility for a proper defense of them, and set the standard of proof however high it needs to be to protect you, infinitely if necessary. Sleight of hand like this isn't just bread-and-butter for New Atheists, of course. Creationists and climate change skeptics rely heavily on it as well. Denial... it ain't just a river in Egypt anymore! ;-)

To be fair, this would be valid if we were postulating the activity of demigods in the created order as one possible explanation for some phenomenon.
If my fishing buddy insists that the nibble I just had was a trout, I'm under no obligation to defend my skepticism when we both know the pond is full of bass and catfish as well. The burden of proof is on him to produce evidence for his "trout" theory as opposed to a bass or catfish one. But as we've seen, that's not what's happening here. We aren't offering any "god hypothesis" to account for something in the natural world, whether it be trout in a pond or anything else. We're formally demonstrating that a set of metaphysical axioms requires His existence. Atheists like Harris and my friend aren't rejecting belief in "a god or gods"—they're rejecting the metaphysical axioms that lead to the God of Classical Theism. That cannot be done in a vacuum without committing oneself to some, or all, of the following counter-axioms:

8)   The universe is a brute fact. Science may reveal its countless subtleties and underlying unities, but ultimately it just has the contingent features it does rather than an infinite number of other possibilities. There is no reason why... it just is that way.

9)   Per 8), the beginning of the universe's existence (13.73 billion years ago) is also a brute fact. There is no reason why... it just created itself from nothing.

10)   There is no such thing as causality—only events unfolding in certain ordered ways. "Causality" is just a concept we use to describe the appearance of mechanism between bits of stuff (what I referred to above as "interactions"), but ultimately those events are, to use David Hume's term, "loose and separate." They have no inherent relationship to each other.

11)   Matter does not actually possess any inherent properties or essential natures of the sort that could be described in terms of essence or potency (as I defined them above). Reality is ultimately just "bits of stuff" mechanically interacting according to mathematical laws expressed in terms of parameters that give the appearance of such. ["Um, 'interactions' and 'laws'…? Didn't you just say in 10) that…?" "Silence Dorothy! Pay no attention to that man behind the curtain...!"]

12)   The rationality of the laws of nature—that those "loose and separate" events between bits of stuff happen to unfold according to what physicist Eugene Wigner called "the unreasonable effectiveness of mathematics"—is also a brute fact. There is no reason why... it just is that way.

13)   "Loose and separately" ordered bits of stuff are blind, and as such the universe ascribes no objective value or purpose. Everything in it, including us, is a byproduct of random, meaningless accidents—what Richard Dawkins called "blind, pitiless indifference" (Dawkins, 1996). Thus, morality is either nihilistic or entirely subjective.

14)   Alternately, if objectively normative moral values do exist—yours, mine, or anyone else's—then they too are brute facts. There is no reason why... they just are what they are. ["But my goodness gracious… isn't it marvelous how nicely they align with mine…?"]

15)   Consciousness and personhood are illusory. To again use David Hume's term, we're just "bundles of percepts" in bodies made up of bits of stuff behaving according to deterministic laws. ["Um, 'deterministic'…? Didn't you say in 10) that…?" "Silence Dorothy! Pay no attention to that man behind the curtain...!"] "You" and "I" are concepts we use to describe our experience of the neural activity in our brains, and how it affects our perceptions and behaviors.
Beyond that, we are no more "persons" in the sense of being freely empowered, intentional, rational agents than an email server is (analytic philosophers refer to this viewpoint as eliminative materialism).

16)   Though we are accidentally evolved "bundles of percepts," our perceptions and reasoned thoughts are reliable sources of knowledge of the deepest inner workings of the universe and ourselves.

Notice that these aren't mere "rejections" of anything. Like 1) through 7), they're positive metaphysical assertions about the ontic foundations of the universe, and as such, they have rational consequences. We can reject belief in mythological demigods, invisible dragons, or the Flying Spaghetti Monster if we like. But we cannot reject the God of Classical Theism without committing ourselves to a fully developed and properly defended philosophy of Materialism, any more than we can reject belief in light without accepting belief in darkness—and such a philosophy is, of course, precisely what every Atheist philosopher of any repute in history has labored to produce. David Hume, Friedrich Nietzsche, Bertrand Russell, Antony Flew… these and many other luminaries devoted their lives to producing materialistic philosophies of nature, mind, and ethics based on some, or all, of the above counter-axioms, and published countless influential works in the process (Hume, 2000; 2017; Nietzsche, 2000; Russell, 1967; 2017; Flew, 2005, to name a few). According to Harris and my friend, all of that was a waste of time—what these and countless other luminaries should've been doing was belittling televangelists and suicide bombers on social media and in TED talks to like-minded audiences. They, of course, knew better. Those who insist that there's no evidence for "a god or gods" are merely demonstrating that they don't even understand the question, much less have a properly thought-out answer for it.4

A reporter once presented the late Samuel Shenton, then president of the Flat Earth Society, with a photograph of earth taken by the Apollo 13 astronauts from roughly 150,000 miles' distance. Shenton stared long and hard at it, after which he began to nod. "Yes," he finally said… "It is easy to see how the untrained eye could be fooled by that picture!" Well-trained eyes are becoming an increasingly important part of the modern intellectual landscape… particularly in secular communities that wear their claims to "reason" and objectivity like golden tiaras. But as I said in my last post, if our only tool is a hammer then sooner or later everything will look like a nail.

Though some would deny it (sincerely, I believe), to many in these communities, science is no longer a discipline. It has become a religion in its own right—Scientism, the sacred Oracle whose mighty outstretched hand no question of earth, sky, heart, or soul can elude. Its practitioners are no longer experts, but authorities—high priests of the goddess Reason, whose metaphysical pronouncements are every bit as authoritative as the theistic fundamentalist dogmas they, often rightly, deride. Nowhere is this more true than with physics—a discipline that not only knocks on the door of many metaphysical questions, but immerses itself in counterintuitive mysteries that at times seem almost magical, and higher mathematics that to the guy on the street is every bit as arcane as ancient hieroglyphics… so much so that a term has even been coined for it: physics envy.
And human nature being what it is, once a scientist has been elevated from mere expertise to the august status of High Priest, he/she becomes an authority not only in their own field, but in beer brewing, Elizabethan poetry, personal lubricants, or any other topic on which it's their whim to have an opinion. Anymore, hardly a week goes by that I don't see yet another news story extolling Stephen Hawking's latest complaints and/or warnings about society, international politics, or the impacts of technology on the future of humanity—as though expertise in quantum cosmology qualifies him to speak to any of those topics. [That isn't Hawking's fault, of course. Scientists rarely ask for the deification so glibly bestowed on them by a credulous public.]

Unfortunately, there's one big problem with all this… Like it or not, science is a discipline, not an Oracle. A powerful discipline to be sure, and one that has rolled back the mysteries of the universe like no other, but a discipline nonetheless, and for damn sure, no more either. And like all other disciplines, it is, and always will be, but one tool among many. As such, it lends itself to many but not all questions, and the experts who wield it are fallen mortals every bit as subject to their own hopes, fears, and human limitations as we are. It's the height of naivete and outright hubris to pretend that we can cleanse it of our own limitations and treat it like a magic wand that can answer every question, meet every moral, spiritual, and existential need, and endow our existence with purpose… and we pay a steep price when we do. The philosopher Alfred North Whitehead warned of exactly this long ago. True that.

1)   I'm not knowledgeable enough about Hinduism to speak with any authority about it, but its concept of Brahman as the Absolute appears to bear some similarity to the God of the Abrahamic traditions. If so, then including it in this list would raise the tally of humanity that embraces some version of the God of Classical Theism to nearly 70%.

2)   There is one version of the cosmological argument that does presume that the universe had a beginning—the Kalam cosmological argument, whose most notable proponent is William Lane Craig. The Kalam argument differs from the traditional one in that it contains two additional premises: that whatever begins to exist has a cause, and that this cause must be transcendent because (per Parmenides) the universe cannot efficiently cause itself. But like the traditional cosmological argument, it takes this cause to be essentially-ordered as well; it isn't based on time-ordered causality either.

3)   Interestingly, some physicists and philosophers are now beginning to question this, and their reasons are rather surprising. In recent years, multiverse models based on eternal inflation and the so-called string landscape have, in the eyes of many physicists, become "the best game in town" for a "theory of everything" that could potentially resolve many issues in physics and cosmology. The inflationary framework accounts beautifully for a few cosmological conundrums that would otherwise be inexplicable (e.g. – the "flatness" problem, and the uniformity of the cosmic microwave background). But in the absence of a viable candidate for the inflaton (as of this writing), the scalar potential/s in inflationary models are flexible enough that, for the time being at least, validating the framework has largely proven to be a whack-a-mole exercise. For every model that's been observationally ruled out, more have sprung up.
Likewise, while string theory has led to much progress in many areas, it has also proven excessively flexible—so much so that since its inception more than 40 years ago, it has yet to make a single testable prediction. Furthermore, the scale on which its real nuts and bolts are expected to reveal themselves requires testing at energies that will never be accessible to us (Woit, 2007). For all intents and purposes, this renders string landscape multiverse models virtually untestable… even in principle. However, in spite of these problems, they offer two really big carrots that, in addition to their other strengths, have proven irresistible to many physicists: a) in conjunction with anthropic arguments, they currently offer the only workable explanations of fine tuning that are based solely on physics; and b) though vulnerable to some formidable arguments that the universe had a beginning, eternal inflation does offer at least some hope for avoiding a creation event. Technically, "eternal" inflation is a reference to future-eternal inflation and thus a bit of a misnomer. A past-eternal universe would run afoul of the Borde–Guth–Vilenkin (BVG) theorem; there are a few ways to get around it, although the best of them are contrived to say the least. The bottom line is that as of this writing, the string landscape/eternal inflation multiverse offers the only path forward for cosmology that doesn't smack of a Creator. Given the theistic alternatives, it's little wonder that many atheist physicists (most notably Sean Carroll) are willing to accept these limitations and argue that it's time to dispense with testable predictions in science. If a theory is "elegant" (in their view) and at least fits observation, it is de facto true. Likewise, it also comes as no surprise that many of the strongest opponents of this movement (known as Post-Empiricism) are Christians like George Ellis (Ellis & Silk, 2014). Ironically, the shoe is now on the other foot. Atheists who for so long have (often rightly) accused religious believers of clinging to comfortable dogmas without evidence are now the ones insisting that science should be divorced from evidence. When their backs are against the wall (and to their credit, IMHO), they prove to be every bit as mortal as people of faith. And like us, they cherish their worldviews enough that they'll occasionally struggle for their preservation even to a fault.

4)   Antony Flew is a particularly telling case in point. Often referred to as the Father of 20th Century Atheism, he was arguably the most important Atheist philosopher of his age. His seminal work God and Philosophy (2005), which was originally published in 1966, almost single-handedly shaped the direction of Atheist thought and scholarship during his lifetime. Shortly before his death in 2010, he shocked the secular world when he set aside his life's work and said that based on reason and evidence, he could no longer deny the existence of God (Flew & Varghese, 2008). Flew didn't conclude with a God who is personal, as in the Bible and Quran, nor did he embrace any major religion. But his God did bear a striking similarity to the God of Classical Theism, and he gave a particularly deferential hat-tip to… Christianity. Needless to say, this dealt New Atheists a narcissistic injury from which they still haven't recovered to this day. The reaction was immediate, and what one would expect. Despite his life's work, Flew was promptly branded an apostate to the True Faith and excommunicated.
Dawkins (2008) fumed about his "tergiversation" (as though using the biggest and most impressive word he could find in a crossword puzzle would somehow convert bullshit into a valid argument). Others resorted to smear campaigns (up to and including accusing him of senility), and intellectual cross-burnings that would make even the flock of Westboro Baptist Church blush. The one thing that was not, and to this day has not been, produced is a properly researched and soundly defended critique of his stance. Perhaps New Atheists are as offended by religion as they are because they have more in common with blindly dogmatic religious fundamentalists than they're prepared to admit.

Armstrong, K. (2015). Fields of Blood: Religion and the History of Violence. Anchor. ISBN 978-0307946966.
Carroll, S. (2017). The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. Dutton. ISBN 978-1101984253.
Carroll, S. & Craig, W. L. (2014). "God and Cosmology: The Existence of God in Light of Contemporary Cosmology." New Orleans Baptist Theological Seminary, New Orleans, LA, March 2014. Transcript available at www.reasonablefaith.org/god-and-cosmology-the-existence-of-god-in-light-of-contemporary-cosmology.
Copan, P., & Craig, W. L. (2004). Creation out of Nothing: A Biblical, Philosophical, and Scientific Exploration. Baker Academic. ISBN 978-0801027338.
Davies, B. (2004). An Introduction to the Philosophy of Religion, 3rd ed. Oxford University Press. ISBN 978-0199263479.
Dawkins, R. (1996). River out of Eden: A Darwinian View of Life. Basic Books. ISBN 978-0465069903.
Dawkins, R. (2008). The God Delusion. Mariner Books. ISBN 978-0618918249.
Ellis, G., & Silk, J. (2014). Scientific method: Defend the integrity of physics. Nature, 516(7531). Available online at www.nature.com/news/scientific-method-defend-the-integrity-of-physics-1.16535.
Feser, E. (2010). The Last Superstition: A Refutation of the New Atheism. St. Augustine's Press. ISBN 978-1587314520.
Feser, E. (2014). Scholastic Metaphysics: A Contemporary Introduction. Editiones Scholasticae. ISBN 978-3868385441.
Feser, E. (2015). Neo-Scholastic Essays. St. Augustine's Press. ISBN 978-1587315589.
Flew, A. (2005). God and Philosophy. Prometheus Books. ISBN 978-1591023302.
Flew, A., & Varghese, R. A. (2008). There Is a God. HarperOne. ISBN 978-0061335303.
Harris, S. (2008). Letter to a Christian Nation. Vintage. ISBN 978-0307278777.
Hume, D. (2000). A Treatise of Human Nature. Oxford University Press. ISBN 978-0198751724.
Hume, D. (2017). An Enquiry Concerning Human Understanding. CreateSpace Independent Publishing Platform. ISBN 978-1461180197.
Martin, C. F. (1997). Thomas Aquinas: God and Explanations. Edinburgh University Press. ISBN 978-0748609017.
Moreland, J. P., & Craig, W. L. (2003). Philosophical Foundations for a Christian Worldview. IVP Academic. ISBN 978-0830826940.
Nietzsche, F. (2000). Basic Writings of Nietzsche. Modern Library. ISBN 978-0679783398.
Oderberg, D. S. (2008). Real Essentialism. Routledge. ISBN 978-0415872126.
Russell, B. (1967). History of Western Philosophy. Simon & Schuster/Touchstone. ISBN 978-0671201586.
Russell, B. (2017). The Problems of Philosophy. CreateSpace Independent Publishing Platform. ISBN 978-1545507636.
Wikipedia. (2016). Greek primordial deities. en.wikipedia.org/wiki/Greek_primordial_deities.
Woit, P. (2007). Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Basic Books. ISBN 978-0465092765.

Posted in Metaphysics, Theology | 29 Comments

Random Linkiness

♦  Why Our Children Don't Think There Are Moral Facts.

Posted in Links | 10 Comments

"The Glimmer"

"The Glimmer" by Aron C. Wall

We used to think it was about individual particles,
and that the measuring apparatus was the same as before.
Then the War began, and it became personal;
each of us needing to roll dice, and consult the railway tables,
To see what new home could become our destination;
or whether, if we could stay, home would remain home.
Some of us—secretly, but entangled with the rest—
dropped the sun on our new-home's enemies.
These histories decohere, but you can still see the glimmer
in each raindrop, winking and saying:
"You won't believe it, not even if someone tells you,
but life is quantum, not classical."

—Holy Saturday, 2017

Posted in Poetry | Leave a comment

Interpreting the Quantum World II: What Does It Mean?

In the first installment of this series, we immersed ourselves in the quantum realm that lies beneath our everyday experience and discovered a universe that bears little resemblance to it. Instead of the solid, unambiguously well-behaved objects we're familiar with, we encountered a unitary framework (\hat U) in which everything (including our own bodies!) is ultimately made of ethereal "waves of probability" wandering through immense configuration spaces along paths deterministically guided by well-formed differential equations and boundary conditions, and acquiring the properties we find in them as they rattle through a random pinball machine of collisions with "measurement" events (\hat M). This is all very elegant—even beautiful… but what does it mean? When my fiancée falls asleep in my arms, her tender touch, the warmth of her breath on my neck, and the fragrance of her hair hardly seem like mere probabilities being kicked around by dice-playing measurements. The refreshing drink of sparkling citrus water I just took doesn't taste like one either. What is it that gives fire to this ethereal quantum realm?
How does the Lord God breathe life into our probabilistic dust and bring about the classical universe of our daily lives (Gen. 2:7)? We finished by distilling our search for answers down to three fundamental dilemmas:

1)  What is the ontological status of the wave function and its unitary evolution \hat U—is it physically real, or merely a mathematical bookkeeping device?

2)  What really happens when a deterministic, well-behaved \hat U evolution of the universe runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other?

3)  If counterfactual definiteness is an ill-formed concept, why are we always left with only one experienced outcome? Why don't we experience entangled realities?

Physicists, philosophers, and theologians have been tearing their hair out over these questions for almost a century, and numerous interpretations have been suggested (more than you might imagine!). Most attempt to deal with 2), and from there, back out answers to 1) and 3). All deserve their own series of posts, so let me apologize in advance for only having time to do a fly-by of the more important ones here. In what follows I'll give an overview of the most viable, and well-received, interpretations to date, and finish with my own take on all of it. So, without further ado, here are our final contestants…

Copenhagen

This is the traditionally accepted answer given by the founding fathers of QM. According to Copenhagen, the cutting edge of reality is in \hat M. The world we exist in is contained entirely in our observations. Per the Born Rule (which assigns an outcome a the probability |\langle a|\psi\rangle|^2), these are irreducibly probabilistic and non-local, and result in classically describable measurements. The wave function and its unitary history \hat U are mere mathematical artifices we use to describe the conditions under which such observations are made, and have no ontic reality of their own. In this sense, Copenhagen has been called a subjective, or epistemic, interpretation because it makes our observations the measure of all things (pun intended :-) ). Although few physicists and philosophers would agree, some of the more radical takes on it have gone as far as to suggest that consciousness is the ultimate source of the reality we observe. Even so, few Copenhagen advocates believe the world doesn't exist apart from us. The tree that falls in the woods does exist whether we're there to see and hear it or not. What they would argue is that counterfactuals regarding the tree's properties and those of whatever caused it to fall don't instantiate if we don't observe them. If no one sees the tree fall or experiences any downstream consequence of its having done so, then the question of whether it has or not is irreducibly ambiguous and we're free to make assumptions about it.

Several objections to Copenhagen have been raised. The idea that ontic reality resides entirely in non-local, phenomenologically discrete "collapse" events that are immune to further unpacking is unsatisfying. Science is supposed to explain things, not explain them away. It's also difficult to see how irreducibly random \hat M events could be prepared by a rational, deterministic \hat U evolution if the wave function has no ontic existence of its own. To many physicists, philosophers, and theologians, this is less a statement about the nature of reality than the universe's way of telling us that we haven't turned over enough stones yet, and may not even be on the right path. For their part, Copenhagen advocates rightly point out that this is precisely what our experiments tell us—no more, no less.
If the formalism correctly predicts experimental outcomes, they say, metaphysical questions like these are beside the point, if not flat-out ill-formed, and our physics and philosophy should be strictly instrumentalist—a stance for which physicist David Mermin coined the phrase "shut up and calculate."

Many Worlds

One response to Copenhagen is that if \hat U seems to be as rational and deterministic as the very real classical physics of our experience, perhaps that's because it is. But that raises another set of questions. As we've seen, nothing about \hat U allows us to grant special status to any of the eigenstates associated with observable operators. If so, then we're left with no reason other than statistical probability to consider any one outcome of an \hat M event to be any more privileged than another. Counterfactuals to what we don't observe should have the same ontic status as those we do. If so, then why do our experiments seem to result in discrete, irreducibly random, and non-local "collapse" events with only one outcome?

According to the Many Worlds interpretation (MWI), they don't. The universe consists of one ontically real and deterministic wave function described by \hat U that's local (in the sense of being free of "spooky action at a distance"), and there's no need for hidden variables to explain \hat M events. What we experience as wave function "collapse" is a result of various parts of this universal wave function separating from each other as they evolve. States within it will remain entangled while their superposed components remain in phase with each other. If/when they interact with some larger environment within it, they eventually lose their coherence with respect to each other and evolve to a state where they can be described by the wave functions of the individual states. When this happens, the entanglement has (for lack of a better term) "bled out" to a larger portion of the wave function containing the previous entanglement and the environment it interacted with, and the states are said to have decohered. Thus, the wave function of the universe never actually collapses anywhere—it just continues to decohere into the separate histories of previously entangled states that continue with their own \hat U histories, never interacting with each other again. As parts of the same universal wave function, all are equally real, and questions of counterfactual definiteness are ill-formed.

The advantages of MWI speak for themselves. From a formal standpoint, a universe grounded on \hat U and decoherence that's every bit as rational and well-behaved as the classical mechanics it replaced certainly has advantages over one based on subjective hand grenade \hat M events. It deals nicely with the relativity-violating non-locality and irreducible indeterminacy that plague Copenhagen as well. And for reasons I won't get into here, it also lends itself nicely to quantum field theory, and to the Feynman path integral ("sum over histories") methods that have proven to be very powerful.

But its disadvantages speak just as loudly. For starters, it's not at all clear that decoherence can fully account for what we directly experience as wave function collapse. Nor is it clear how MWI can make sense of the extremely well-established Born Rule. Does decoherence always lead to separate well-defined histories for every eigenstate associated with every observable that in one way or another participates in the evolution of \hat U?
If not, then what meaning can be assigned to probabilities when some states decohere and others don't? Even if it does, what reasons do we have for expecting that it should obey probabilistic constraints? And of course, we haven't even gotten to the real elephant in the room yet—the fact that we're also being asked to believe in the existence of an infinite number of entirely separate universes that we can neither observe nor verify, even though the strict formalism of QM doesn't require us to.

Physics aside, for those of us who are theists this raises a veritable hornet's nest of theological issues. As a Christian, what am I to make of the cross and God's redemptive plan for us in a sandstorm of universes where literally everything happens somewhere to infinite copies of us all? It's worth noting that some prominent Christian physicists like Don Page embrace MWI, and see in it God's plan to ultimately gather all of us to Him via one history or another, so that eventually "every knee shall bow, and every tongue confess, and give praise to God" (Rom. 14:11). While I understand where they're coming from, and the belief that God will gather us all to Himself some day is certainly appealing, this strikes me as contrived and poised for Occam's razor. In the end, despite its advantages, and with all due respect to Hawking and its other proponents, I don't accept MWI because, to put it bluntly, it's more than merely unnecessary—it's bat-shit crazy. According to MWI there is, quite literally, a world out there somewhere in which I, Scott Church (peace be upon me), am a cross-dressing, goat-worshipping, tantric massage therapist, with 12" Frederick's of Hollywood stiletto heels (none of that uppity Victoria's Secret stuff for me!), and D-cup breast implants… Folks, I am here to tell you… there isn't enough vodka or LSD anywhere on this lush, verdant earth to make that believable! Whatever else may be said about this vale of tears we call Life, rest assured that indeterministic hand grenade \hat M events and "spooky action at a distance" are infinitely easier to take seriously. :D

De Broglie–Bohm

Bat-shit crazy aside, another approach would be to try separating \hat U and \hat M from each other completely. If they aren't playing together at all, we don't have to worry about whether they'll share their toys. Without pressing that analogy too far, this is the basic idea behind the De Broglie–Bohm interpretation (DBB). According to DBB, particles do have definite locations and momenta, and these are subject to hidden variables. \hat U is real and deterministic, and per the Schrödinger equation governs the evolution of a guiding, or pilot, wave function that exists separate from the particles themselves. This wave function is non-local and does not collapse. For lack of a better word, particles "surf" on it, and \hat M events acting on them are governed by the local hidden variables. In our non-local singlet example from Part I, the two electrons were sent off with spin-state box lunches. All of this results in a formalism like that of classical thermodynamics, but with predictions that look much like those of the Copenhagen interpretation. In DBB the Born Rule is an added hypothesis rather than a consequence of the inherent wave nature of particles. There is no particle/wave duality issue, of course, because particles and the wave function remain separate, and Bell's inequalities are accounted for by the non-locality of the latter.
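For the mathematically inclined, here's a minimal sketch of how the "surfing" works in the standard textbook presentation of DBB, for a single spinless particle (the post itself doesn't spell this out, so take it as illustrative rather than part of the original argument). Writing the pilot wave in polar form,

\psi(x,t) = R(x,t)\, e^{iS(x,t)/\hbar},

the particle's (perfectly definite) position Q(t) is postulated to follow the guidance equation

\frac{dQ}{dt} = \frac{1}{m}\nabla S(Q,t),

so the particle trajectory is carried along by the phase of the wave function, while \psi itself evolves deterministically per the Schrödinger equation and never collapses.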
There's a naturalness to DBB that resolves much of the "weirdness" that has plagued other interpretations of QM. But it hasn't been well received. The non-locality of its pilot wave \hat U still raises the whole "spooky action at a distance" issue that physicists and philosophers alike are fundamentally averse to. Separating \hat U from \hat M and duct-taping them together with hidden variables adds layers of complexity not present in other interpretations, and runs afoul of all the issues raised by the Kochen–Specker Theorem. We have to wonder whether our good friend Occam and his trusty razor shouldn't be invited to this party. And like MWI, it's brutally deterministic, and as such subject to all the philosophical and theological nightmares that go along with that—to say nothing of its tension with our direct existential experience as freely choosing people. Even so, for a variety of reasons (including theories of a "sub-quantum realm" where hidden variables can also hide from Kochen–Specker) it's enjoying a bit of a revival, and it does have its rightful place among the contenders.

Consistent Histories

As we've seen, the biggest challenge QM presents is getting \hat U and \hat M to play together nicely. Most interpretations try to achieve this by denying the ontological reality of one, and somehow rolling it up into the other. What if we denied the individual reality of both, and rolled them up into a larger ontic reality described by an expanded QM formalism? Loosely speaking, Consistent Histories (CH, or Decoherent Histories) attempts to do this by generalizing Copenhagen to a quantum cosmology framework in which the universe evolves along the most internally consistent and probable histories available to it. Like Copenhagen, CH asserts that the wave function is just a mathematical construct that has no ontic reality of its own. Where it parts company is in its assertion that \hat U represents the wave function of the entire universe, and it never collapses. What we refer to as "collapse" occurs when some parts of it decohere with respect to larger parts, leading, it is said, to macroscopically irreversible outcomes that are subject to the ordinary additive rules of classical probability.

In CH, the potential outcomes of any observation (and thus, the possible histories the universe might follow) are classified by how homogeneous and consistent they are. This, it's said, is what makes some of them more probable than others. A homogeneous history is one that can be described by a unique temporal sequence of single-outcome propositions, such as "I woke up" > "I got out of bed" > "I showered"… Those that cannot be, such as ones that include statements like "I walked to the grocery store or drove there," are not. These events can be represented by projection operators \hat P from which histories can be built, and the more internally consistent they are (per criteria contained in a class operator \hat C), the more probable they are. Thus, in CH \hat M is not a fundamental QM concept. The evolution of the universe is described by a mathematical construct, \hat U, that can be interpreted as decohering into the most internally consistent (and therefore probable) homogeneous histories possible for it. The paths these histories take give us a framework in which some sets of classical questions can be meaningfully asked, and others can't. Returning to our electron singlet example, CH advocates would maintain that the wave function wasn't entangled in any real physical sense.
Rather, there are two internally consistent histories for the prepared electrons that could have emerged from a spin measurement: Down/Up and Up/Down. A superposed "Down/Up + Up/Down" isn't a meaningful state, so it's meaningless to say that the universe was "in" it. Rather, when the entire state of us/laboratory/observation is accounted for, we will find that the universe followed the history that was most consistent for that. There is no need to discriminate between observer and observed. Decoherence is enough to account for the whole history, so \hat M is a superfluous construct.

CH advocates claim that it offers a cleaner and less paradoxical interpretation of QM and classical effects than its competitors, and a logical framework for discriminating boundaries between classical and quantum phenomena. But it too has its issues. It's not at all clear that decoherence is as macroscopically irreversible as it's claimed to be, or that by itself it can fully account for our experience of \hat M. It also requires additional projection and class operator constructs not required by other interpretations, and these cannot yet be formulated to any degree practical enough to yield a complete theory.

Objective Collapse Theories

Of course, we could just make our peace with \hat U and \hat M. Objective collapse, or quantum mechanical spontaneous localization (QMSL), models maintain that the universe reflects both because the wave function is ontologically real, and "measurements" (perhaps interactions is a better term here) really do collapse it. According to QMSL theories, the wave function is non-local, but collapses locally in a random manner (hence the "spontaneous localization"), or when some physical threshold is crossed. Either way, observers play no special role in the collapse itself. There are several variations on this theme. The Ghirardi–Rimini–Weber theory, for instance, emphasizes random collapse of the wave function to highly probable stable states. Roger Penrose has proposed another theory based on energy thresholds. Particles have mass-energy that, per general relativity, will make tiny "dents" in the fabric of space-time. According to Penrose, in the entangled states of their wave function these will superpose as well, and there will be an associated gravitational self-energy difference that superposed states can sustain only up to a critical threshold (which he theorizes to correspond to masses on the order of one Planck mass). When they decohere to a point where this threshold is exceeded, the wave function collapses per the Born Rule in the usual manner (Penrose, 2016).

For our purposes, this interpretation pretty much speaks for itself, and so do its advantages. Its disadvantages lie chiefly in how we understand and formally handle the collapse itself. For instance, it's not clear this can be done mathematically without violating conservation of energy or bringing new, as-yet undiscovered physics to the game. In the QMSL theories that have been presented to date, if energy is conserved the collapse doesn't happen completely, and we end up with left-over "tails" in the final wave function state that are difficult to make sense of with respect to the Born Rule. It has also proven difficult to render the collapse compliant with special relativity without creating divergences in probability densities (in other words, blowing up the wave function). Various QMSL theories have handled issues like this in differing ways, some more successfully than others, and research in this area continues.
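To give a rough sense of scale for Penrose's proposal (this back-of-the-envelope form is how it's commonly stated in the literature—my gloss, not the post's): if E_G is the gravitational self-energy of the difference between the two superposed mass distributions, the superposition is expected to survive only for a time of order

\tau \sim \frac{\hbar}{E_G},

so everyday macroscopic superpositions would collapse almost instantly, while atomic-scale ones could persist essentially forever—which is why the tabletop optomechanical tests cited below aim squarely at the mesoscopic middle ground.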
But to date, none of the theories on the table offers a slam-dunk. The other problem QMSL theories face is a lack of experimental verification. Random collapse theories like Ghirardi–Rimini–Weber could be verified if the spontaneous collapse of a single particle could be detected. But such collapses are thought to be extremely rare, and to date, none have been observed. However, several tests of QMSL theories have been proposed (e.g. – Marshall et al., 2003; Pepper et al., 2012; or Weaver et al., 2016, to name a few), and with luck, we'll know more about them in the next decade or so (Penrose, 2016).

There are many other interpretations of QM, some of which are more far-fetched than others. But the ones we've covered today are arguably the most viable, and as such, the most researched. As we've seen, all have their strengths and weaknesses. Personally, I lean toward Objective Collapse scenarios. It's hard to believe that something as well-constrained and mathematically coherent as \hat U isn't ontologically real. Especially when the alternative bedrock reality being offered is \hat M, which is haphazard and difficult to separate from our own subjective consciousness (the latter in particular smacks of solipsism, which has never been a very compelling, or widely accepted, point of view). Of the competing alternatives that would agree about \hat U, MWI is probably the strongest contender. But for reasons that by now should be disturbingly clear, it's far easier for me to accept a non-local wave function collapse than its take on \hat M. Call me unscientific if you will, but ivory towers alone will never be enough to convince me that I have a cross-dressing, goat-worshipping, voluptuous doppelganger somewhere that no one can ever observe.

Other interpretations don't fare much better. Most complicate matters unnecessarily and/or deal with the collapse in ways that render \hat M deterministic. It's been said that if your only tool is a hammer, eventually everything is going to look like a nail. It seems to me that such interpretations are compelling to many because they're tidy. Physicists and philosophers adore tidy! Simple, deterministic models with well-defined differential equations and boundary conditions give them a fulcrum point where they feel safe, and from which they think they can move the world. This is fine for what it's worth, of course. Few would dispute the successes our tidy, well-formed theories have given us. But if the history of science has taught us anything, it's that nature isn't as enamored with tidiness as we are. Virtually all our investigations of QM tell us that indeterminism cannot be fully exorcized from \hat M, and the term "collapse" fits it perfectly. Outside the laboratory, everything we know about the world tells us we are conscious beings made in the image of our Creator. We are self-aware, intentional, and capable of making free choices—none of which is consistent with tidy determinism. Anyone who disputes that is welcome to come up with a differential equation and a self-contained set of data and boundary conditions that required me to decide on a breakfast sandwich rather than oatmeal this morning… and then collect their Nobel and Templeton prizes and retire to the lecture circuit. The bottom line is that we live in a universe that presents us with \hat U and \hat M. As far as I'm concerned, if the shoe fits I see no reason not to wear it. Yes, QMSL theories have their issues.
But compared to other interpretations, their problems are formalistic ones of the sort I suspect will be dealt with when we're closer to a viable theory of quantum gravity. When we as students are ready, our teacher will come. Until then, as Einstein reputedly said, the world should be made as simple as possible, but no simpler.

When I was in graduate school my thesis advisor used to say that when people can't agree on the answer to some question, one of two things is always true: either there isn't enough evidence to answer the question definitively, or we're asking the wrong question. Perhaps many of our QM headaches have proven as stubborn as they are because we're doing exactly that… asking the wrong questions. One possible case in point… physicists have traditionally considered \hat U to be sacrosanct—the one thing that, above all others, only the worst apostates would ever dare to question. Atheist physicist Sean Carroll has gone so far as to claim that it proves the universe is past-eternal, and God couldn't have created it! [There are numerous problems with that of course, but they're beyond the scope of this discussion.] However, Roger Penrose is now arguing that we need to do exactly that (fortunately, he's respected enough in the physics community that he can get away with such challenges to orthodoxy without being dismissed as a crank or heretic). He suggests that if we started with the equivalence principle of general relativity instead, we could formulate a QMSL theory of \hat U and \hat M that would resolve many, if not most, QM paradoxes, and this is the basis for his gravitationally-based QMSL theory discussed above. Like its competitors, Penrose's proposal has challenges of its own, not the least of which are the difficulties that have been encountered in producing a rigorous formulation of \hat M along these lines. But of everything I've seen so far, I find it to be particularly promising!

But then again, maybe the deepest secrets of the universe are beyond us. Isaac Newton famously compared himself to a boy playing on the seashore, diverting himself with smoother pebbles and prettier shells while the great ocean of truth lay all undiscovered before him. As scientists, we press on, collecting our shiny pebbles and shells on the shore of the great ocean with humility and reverence as he did. But it would be the height of hubris for us to presume that there's no limit to how much of it we can wrap our minds around before we have any idea what's beyond the horizon. As J. B. S. Haldane once said, "My own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose." (Haldane, 1928) Who knows? Perhaps he was right. God has chosen to reveal many of His thoughts to us. In His infinite grace, I imagine He'll open our eyes to many more. But He certainly isn't under any obligation to reveal them all, nor do we have any reason to presume that we could handle it if He did. But of course, only time will tell.

One final thing… Astute readers may have noticed one big elephant in the room that I've danced around but not really addressed yet… relativity. Position, momentum, energy, and time have been a big part of our discussion today… and they're all inertial frame dependent, and our formal treatment of \hat U and \hat M must account for that. There are versions of the Schrödinger equation that do this—most notably the Dirac and Klein–Gordon equations. Both, however, are semi-classical equations—that is, they dress up the traditional Schrödinger equation in a relativistic evening gown and matching handbag, but without an invitation to the relativity ball.
For a ticket to the ball, we need to take QM to the next level… quantum field theory. But these are topics for another day, and I've rambled enough already… so once again, stay tuned!

Haldane, J. B. S. (1928). Possible Worlds: And Other Papers. Harper & Bros.

Marshall, W., Simon, C., Penrose, R., & Bouwmeester, D. (2003). Towards quantum superpositions of a mirror. Physical Review Letters, 91(13). Available online at journals.aps.org/prl/abstract/10.1103/PhysRevLett.91.130401.

Pepper, B., Ghobadi, R., Jeffrey, E., Simon, C., & Bouwmeester, D. (2012). Optomechanical superpositions via nested interferometry. Physical Review Letters, 109(2). Available online at journals.aps.org/prl/abstract/10.1103/PhysRevLett.109.023601.

Penrose, R. (2016). Fashion, Faith, and Fantasy in the New Physics of the Universe. Princeton University Press. ISBN 0691178534.

Weaver, M. J., Pepper, B., Luna, F., Buters, F. M., Eerkens, H. J., Welker, G., ... & Bouwmeester, D. (2016). Nested trampoline resonators for optomechanics. Applied Physics Letters, 108(3). Available online at aip.scitation.org/doi/abs/10.1063/1.4939828.
krypy 2.1.6

Krylov subspace methods for linear systems

KryPy gives you an easy-to-use yet flexible interface to Krylov subspace methods for linear algebraic systems. Compared to the implementations in SciPy (or MATLAB), KryPy allows you to supply additional arguments that may help you to tune the solver for the specific problem you want to solve. The additional arguments may also be of interest if you are doing research on Krylov subspace methods.

Some features of KryPy are:

• Full control of preconditioners - the order of applying preconditioners matters. This is why you can supply two left preconditioners (one of which implicitly changes the inner product and thus has to be positive definite) and one right preconditioner. Take a look at the arguments M, Ml and Mr.
• Get the Arnoldi/Lanczos basis and Hessenberg matrix - you want to extract further information from the generated vectors (e.g., recycling)? Just pass the optional argument store_arnoldi=True.
• Explicitly computed residuals on demand - if you do research on Krylov subspace methods or preconditioners, then you sometimes want to know the explicitly computed residual in each iteration (in contrast to an updated residual, which can be obtained implicitly). Then you should pass the optional argument explicit_residual=True.
• Compute errors - if you have (for research purposes) the exact solution at hand and want to monitor the error in each iteration instead of the residual, you can supply the optional argument exact_solution=x_exact to the LinearSystem.

The documentation is hosted online.

[Figure: convergence history of GMRES for the example below]

The above convergence history is obtained with the following example, where the Gmres method is used to solve the linear system A*x=b with the diagonal matrix A=diag(1e-3,2,...,100) and right-hand side b=[1,...,1]:

    import numpy
    from krypy.linsys import LinearSystem, Gmres

    # create the linear system A*x=b and solve it with GMRES
    linear_system = LinearSystem(A=numpy.diag([1.0e-3] + list(range(2, 101))),
                                 b=numpy.ones((100, 1)))
    sol = Gmres(linear_system)

    # plot the residual history
    from matplotlib import pyplot
    pyplot.semilogy(sol.resnorms)
    pyplot.show()

Of course, this is just a toy example where you would not use GMRES in practice. KryPy can handle arbitrarily large matrices - as long as the (hopefully sparse) matrices and the generated basis of the Krylov subspace fit into your memory. ;) Furthermore, in actual applications, you definitely want to adjust Gmres' parameters such as the residual tolerance.

Help can be obtained via Python's builtin help system. For example, you can use the ? in IPython:

    from krypy.linsys import Gmres
    Gmres?

pip / PyPi

Simply run pip install krypy. There's an Ubuntu PPA with packages for Python 2 and Python 3.

Installing from source

KryPy has the following dependencies:

• NumPy
• SciPy

KryPy is currently maintained by André Gaul. Feel free to contact André. Please submit feature requests and bugs as GitHub issues.

KryPy is developed with continuous integration.

To create a new release:

1. Bump the __version__ number.
2. Create a Git tag:

    $ git tag -a v0.3.1
    $ git push --tags

3. Upload to PyPi:

    $ make upload

KryPy is free software licensed under the MIT License.

KryPy evolved from the PyNosh package (a Python framework for nonlinear Schrödinger equations; joint work with Nico Schlömer), which was used for experiments in the following publication:

• Preconditioned Recycling Krylov subspace methods for self-adjoint problems, A. Gaul and N. Schlömer, arXiv:1208.0264, 2012.

File                             Type    Py Version  Uploaded on  Size
krypy-2.1.6.tar.gz (md5, pgp)    Source              2017-06-06   45KB
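To make the optional arguments listed above concrete, here is a minimal sketch of a tuned solve. This is my own illustration, not taken from the KryPy docs: the exact_solution, store_arnoldi, and explicit_residual keywords are the ones described above, while the tolerance keyword tol and the resnorms attribute are assumptions about the API that you should verify with help(Gmres).

    import numpy
    from krypy.linsys import LinearSystem, Gmres

    A = numpy.diag([1.0e-3] + list(range(2, 101)))
    b = numpy.ones((100, 1))

    # A is diagonal, so the exact solution is cheap to compute; passing it
    # lets KryPy track the true error per iteration (see "Compute errors").
    x_exact = b / numpy.diag(A).reshape(-1, 1)
    linear_system = LinearSystem(A, b, exact_solution=x_exact)

    sol = Gmres(linear_system,
                tol=1e-10,               # assumed name of the residual tolerance
                store_arnoldi=True,      # keep Arnoldi basis/Hessenberg matrix
                explicit_residual=True)  # recompute b - A*x in every iteration

    print(sol.resnorms[-1])              # final relative residual norm (assumed attribute)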
Sunday, January 31, 2016

The Greatest Guitar Player You Probably Never Heard Of

Danny Gatton began his career playing in bands while still a teenager. He began to attract wider interest in the 1970s while playing guitar and banjo for the group Liz Meyer & Friends. He made his name as a performer in the Washington, DC, area during the late 1970s and 1980s, both as a solo performer and with his Redneck Jazz Explosion, in which he traded licks with virtuoso pedal steel player Buddy Emmons over a tight bass-drums rhythm that drew from blues, country, bebop, and rockabilly influences. He also backed Robert Gordon and Roger Miller. He contributed a cover of Apricot Brandy, a song by the Elektra Records supergroup Rhinoceros, to the 1990 compilation album Rubáiyát.

Gatton's playing combined musical styles such as jazz, blues, and rockabilly in an innovative fashion, and he was known by some as the Telemaster. He was also called the world's greatest unknown guitarist, and The Humbler, for his ability to out-play anyone willing to go up against him in "head-cutting" jam sessions. Amos Garrett, guitar player for Maria Muldaur, gave Gatton the nickname. A photo published in the October 2007 issue of Guitar Player magazine shows Gatton playing in front of a neon sign that says "Victims Wanted."

However, he never achieved the commercial success that his talent arguably deserved. His album 88 Elmira Street was up for a 1990 Grammy Award for the song Elmira Street Boogie in the category Best Rock Instrumental Performance, but the award went to Eric Johnson for Cliffs of Dover. His skills were most appreciated by his peers, such as Eric Clapton, Willie Nelson, Steve Earle, and his childhood idol Les Paul. During his career, Gatton appeared on stage with guitar heroes such as Alvin Lee and Jimmie Vaughan. Gatton had roomed with Roy Buchanan in Nashville, Tennessee, in the mid '60s, and they became frequent jamming partners, according to Guitar Player magazine's October 2007 issue. He also performed with old teenage friend Jack Casady and Jorma Kaukonen (from Jefferson Airplane and Hot Tuna) as Jack and the Degenerates. Those recordings were never released, but live tapes are in circulation. In 1993, rocker Chris Isaak invited Gatton to record tracks for Isaak's San Francisco Days CD. Reports of where Gatton's playing can be heard on the CD vary, with unconfirmed reports placing him on either Can't Do a Thing (To Stop Me), 5:15, or Beautiful Houses.

He usually played a 1953 Fender Telecaster with Joe Barden pickups and Fender Super 250Ls, or Nickel Plated Steel (.010 to .046, with a .015 for the G) strings. (Fender now makes a replica of his heavily customized instrument.) For a slide, Gatton sometimes used a beer bottle or mug. In the March 1989 issue of Guitar Player magazine, he said he preferred to use an Alka-Seltzer bottle or a long 6L6 vacuum tube as a slide, but that audiences liked the beer bottle. He did, however, only play slide overhand, citing his earlier training in steel guitar [Guitar Player, March 1989]. Among the amplifiers Gatton is known to have used are a 1959 Fender Bassman and a heavily customized blackface Fender Vibrolux Reverb. Throughout the late 1980s and early 1990s, Danny worked closely with Fender to create his very own signature model guitar, the Danny Gatton Signature Telecaster, released in 1990.

Danny Gatton has been described as possessing an extraordinary proficiency on his instrument, "a living treasury of American musical styles."
In 2009, John Previti, who played bass guitar with Danny for 18 years, stated: "You know, when he played country music, it sounded like all he played was country music. When he played jazz, it sounded like that's all he played, rockabilly, old rock and roll, soul music. You know, he called himself a Whitman sampler of music." Legendary guitarist Steve Vai reckons Danny "comes closer than anyone else to being the best guitar player that ever lived." Accomplished guitar veteran Albert Lee said of Gatton: "Here's a guy who's got it all."

Wednesday, January 27, 2016

Space Roar

Today's trivia comes from scientists at NASA's Goddard Space Flight Center, who sent a machine called ARCADE into space on a giant balloon to search for radiation from the universe's earliest stars; the puzzling result was announced in 2009. ARCADE is an acronym for Absolute Radiometer for Cosmology, Astrophysics, and Diffuse Emission, and it carried seven sensors that picked up electromagnetic radiation like radio waves. The plan was to lift it far enough up to prevent the Earth's atmosphere from interfering with the data collection. Then, the finely tuned instrument could detect faint radio signals from ancient stars. But there was a problem. ARCADE detected a huge amount of radio noise — six times louder than scientists had predicted — which has since come to be known as the "space roar." There are some theories, but we don't know for sure what's causing it.

Please don't misunderstand. Space isn't roaring in any way that our ears could hear. But there are objects in the universe — including some galaxies — which emit radio waves via synchrotron radiation.

Monday, January 25, 2016

Summer Of Love

For some reason I always get it mixed up with the summer of 1968, but the Summer of Love was a social phenomenon that occurred during the summer of 1967. Those who keep track of such things say that the party got kicked off when as many as 100,000 people converged on the Haight-Ashbury neighborhood of San Francisco, initiating a major cultural and political shift in the US. Although hippies also gathered in major cities across Canada and Europe, San Francisco remained the epicenter of the social earthquake that would come to be known as the Hippie Revolution.

San Francisco was rocking in those days and became even more of a melting pot of politics, music, drugs, creativity, and the total lack of sexual and social inhibition than it already was. As the hippie counterculture movement came into public awareness, it caused numerous 'ordinary citizens' to begin questioning how law and order was being administered in an authoritarian society (yes, I know, things haven't changed much).

The hippies, sometimes called flower people, were an eclectic group of social misfits. Many were suspicious of the government, rejected consumerist values, and generally opposed the Vietnam War. Others were uninterested in political affairs and preferred to spend their time involved in the aforementioned pursuit of sex, drugs, and music. Inspired by the beatniks of the 1950s, who had flourished in the North Beach area of San Francisco, those who gathered in Haight-Ashbury in 1967 rejected the conformist values of Cold War America. These hippies eschewed the material benefits of modern life and relied on their own wits for food and shelter. Some worked, some stole, some borrowed, and others begged, but a theme of helpfulness emerged.
James Rado and Gerome Ragni were in attendance and absorbed the whole experience, which became the basis for the musical Hair. Rado recalled, "There was so much excitement in the streets and the parks and the hippie areas, and we thought, 'If we could transmit this excitement to the stage it would be wonderful....' We hung out with them and went to their Be-Ins [and] let our hair grow. It was very important historically, and if we hadn't written it, there'd not be any examples. You could read about it and see film clips, but you'd never experience it. We thought, 'This is happening in the streets,' and we wanted to bring it to the stage."

Timothy Leary was there, and he proclaimed, "turn on, tune in, drop out," a notion that persisted throughout the Summer of Love.

John Phillips of The Mamas & the Papas wrote the song "San Francisco (Be Sure to Wear Flowers in Your Hair)" for his friend Scott McKenzie. It served both to promote the Monterey Pop Festival that Phillips was helping to organize and to popularize the flower children of San Francisco, who came to epitomize the hippie dream. Released on May 13, 1967, the song was an instant hit. By the week ending July 1, 1967, it reached the number four spot on the Billboard Hot 100 in the United States, where it remained for four consecutive weeks. Meanwhile, the song rose to number one in the United Kingdom and most of Europe. The single is purported to have sold over 7 million copies worldwide.

In New York City, an event in Tompkins Square Park in Manhattan on Memorial Day in 1967 sparked the beginning of the Summer of Love there. During this concert in the park, some police officers asked for the music to be turned down. In response, some in the crowd threw various objects, and thirty-eight arrests ensued. A debate about the threat of the hippie then arose between Mayor John Lindsay and Police Commissioner Howard Leary. After this event, Allan Katzman, the editor of the East Village Other, predicted that 50,000 hippies would enter the area for the summer.

Double that amount - as many as 100,000 young people from around the world - flocked to San Francisco's Haight-Ashbury district, as well as to nearby Berkeley and to other San Francisco Bay Area cities, to join in a popularized version of the hippie experience. Free food, free drugs, and free love were available in Golden Gate Park, a Free Clinic was established for medical treatment, and a Free Store gave away basic necessities to anyone who needed them.

After resigning his tenured position at Harvard, former professor of psychology Timothy Leary became a major advocate for the recreational use of LSD, spreading his beliefs up and down the East Coast. After taking psilocybin, a drug extracted from certain mushrooms that causes effects similar to those of LSD, Leary supported the use of all psychedelics for personal development. He often invited friends as well as the odd graduate student to trip along with him and colleague Richard Alpert.

The Merry Pranksters

On the West Coast, author Ken Kesey, a prior volunteer for a CIA-sponsored LSD experiment, also advocated the use of LSD. Shortly after participating, he was inspired to write the bestselling novel One Flew Over the Cuckoo's Nest.
Subsequently, after buying an old school bus, painting it with psychedelic graffiti, and attracting a group of similarly minded individuals he dubbed the Merry Pranksters, Kesey and his group traveled across the country, often hosting "acid tests," where they would fill a large container with a diluted, low-dose form of the drug and give out diplomas to those who passed their test.

Along with LSD, marijuana was also used heavily during this period. With the various all-organic movements beginning to expand, this drug was even more appealing than LSD because, apart from creating a euphoric high, it was all-natural as well. As a result, however, arrests rose among users as several laws were enacted to control the use of both drugs. Efforts at repealing oppressive drug laws have been unsuccessful, but drug use has not abated in all these years.

In New York, the rock musical Hair, which told the story of the hippie counterculture and sexual revolution of the 1960s, opened Off-Broadway on 17 October 1967.

On September 2, 2007, San Francisco celebrated the 40th anniversary of the Summer of Love by holding numerous events around the region, most of which were attended by some of the original participants and their children.

And what a time it was. Historical, histrionic, and sometimes hysterical. Despite the dark shade of anti-war activism, the looming plans of the communists and the neocons, as well as casual drug use, the overall impression was that the circus had come to town, and although many a shaved head in the establishment shook in disbelief, a good time was generally had by all.

Saturday, January 23, 2016

Public Education

Whenever someone publicly questions the government-run school system, they are almost always accused of opposing education itself. This is one reason why state legislators find themselves under pressure to properly fund and cater to the public education establishment. In most places the largest employers are the public school districts. This translates into political power, and those in power typically don't appreciate being questioned. Clearly, many people consider the public education system to be a sacred cow of sorts. But almost none have any concept of the origins, the history, or the goals of public education in America. Few Americans understand that our government-controlled school system was founded upon authoritarian ambitions.

State-directed schools find their roots in the Prussian schools of the early 19th century. In the 1840s, Horace Mann, then secretary of the Massachusetts Board of Education, traveled to Europe to study the Prussian model of public education. He was seeking a way to change what he deemed the "unruly" (meaning independent) children into disciplined citizens. To that end, the Prussian educational system sought to take education out of the hands of family and church with five key goals in mind. It was to create:

- Obedient workers for the mines.
- Obedient soldiers for the army.
- Well-subordinated civil servants to government.
- Well-subordinated clerks to industry.
- Citizens who thought alike about major issues.

The reasoning behind such a system is easy to understand, since independently educated masses could not always be counted on to submit to their government's objectives. Tyrants like Prussia's Frederick William I and France's Napoleon each used this system to build a powerful, controlling state apparatus. Other despots followed in their footsteps.
Educator John Taylor Gatto's book, "The Underground History of American Education," describes how the system came to America: "A small number of passionate ideological leaders visited Prussia in the first half of the 19th Century, fell in love with the order, obedience, and efficiency of its educational system and campaigned relentlessly thereafter to bring the Prussian vision to our shores. To do that, children would have to be removed from their parents and inappropriate cultural influences."

The next step was to sell the new system to the American public in the name of equality by convincing each respective state to adopt a compulsory government school system to ensure a uniform education for the masses. The primary goals of this system were not intellectual training but rather conditioning the students for obedience, subordination, and collective life. With this bit of historical perspective regarding the origins and stated intentions of public education, it's much easier to understand why a "free education for all children in public schools" was a key plank of Marx's Communist Manifesto.

To this day, the defenders of state-sponsored education insist that it was implemented at the request of the American people. But this was not necessarily the case. Sheldon Richman, of the Future of Freedom Foundation, explains it this way:

As writer Karen DeCoster points out, "What is most disquieting about the public education mindset is that those who believe most strongly in it are convinced that there are 'no other' noble alternatives, and that the alternatives that do exist are merely a hindrance to the only real education, that which is provided via the public domain."

The government school system is filled with people who generally go along in order to keep their jobs. They teach from socialist curriculums, and some of them believe the BS and some don't. Like most of us, they work within a system founded upon authoritarianism. Generally, the problem with government-run education is the system itself, not the people who work for it.* No people can remain free without being truly educated, but that's not the same thing as having mere uniformity of thought.

*As a caveat, I'd like to add that this is not always the case, as I have run across a number of educators whom I perceived to be bat-crap crazy.

Friday, January 22, 2016

What Fresh Insanity Is This?

Whenever the subject of socialism and oppression and personal liberty comes up, there's always one in the crowd who claims that they're proud to pay their taxes. I think what they mean to say is that they are relieved they have enough money to pay the government, because the alternative results in some pretty extreme legal punishments, like prison and hefty fines. But the thing is, for someone to say they're proud to pay taxes, well, it's really a pathetic, stupid thing to say. Why? Keep reading.

1) Being "proud" that you did something you were FORCED to do is absurd. If you voluntarily contribute to the well-being of your fellow man, great. But it's not charitable -- you get no credit for compassion or generosity -- if you had no choice in the matter. Taking "pride" in being robbed is loony tunes. We've got what it takes to take what you've got.

2) Unless you completely approve of everything "government" does, then being "proud to pay your taxes" means being proud to pay for things that you oppose. How much of a schizophrenic do you have to be to express pride in funding things you don't even want or like?
3) Regarding the "government" services you want (ignoring how inefficient and wasteful "government" always is) -- why feel "proud" to pay for a service? Is it especially noble and selfless to buy stuff? 4) If you think giving money to politicians constitutes "contributing to society," you're delusional. Getting robbed by a carjacker, or just flushing your money down the toilet, does far less harm to society than giving money to those who would oppress you. If you are "proud to pay taxes," look up "Stockholm Syndrome." You only feel "pride" because you were trained to feel loyalty to your political masters, and to feel good about blindly obeying their "laws" and paying the tribute they call "taxes." That doesn't make you a good person. It makes you a good slave. The same guy who says he's proud to pay his taxes also typically votes for power-mad professional politicians because they "have experience and know what they're doing." Yes, it's true, they've made fleecing the American citizen a political art form. So, pay your taxes and grin like an idiot if you want, but you'll never convince me I am better off giving the government everything I can put back over the course of a year. Thank you very little. Thursday, January 21, 2016 The Dude Abides Finally a religion I can live with. Dudeism claims Kurt Vonnegut, Lao Tzu, and Walt Whitman as prophets, among others, and its primary symbol, pictured to the right, is a cross between the Yin and Yang symbol and a bowling ball. It even has its own version of the Tao Te Ching called The Dude De Ching. How radical is that? The Dude abides. Here's the web site. Wednesday, January 20, 2016 Negative Energy Theoretically, the lowest temperature that can be achieved is absolute zero, exactly 273.15°C, where the motion of all particles stops completely. However, you can never actually cool something to this temperature because, in quantum mechanics, every particle has a minimum energy, called “zero-point energy,” which you cannot go below. Because this theory was first suggested by Stephen Hawking, the particles given off by this effect (the ones that don’t fall into the black hole) are called Hawking Radiation. It was the first accepted theory to unite quantum theory with general relativity, making it Hawking’s greatest scientific achievement to date. Tuesday, January 19, 2016 It's estimated that 55% of Americans don't know that the sun is a star. Confederate Heroes’ Day: Texas This is bound to get some people riled up, but others will swell with pride. You see, it's Confederate Heroes' Day in Texas. Confederate Heroes’ Day commemorates those who died fighting for the Confederate States of America during the American Civil War. An official state holiday in Texas, Confederate Heroes’ Day has fallen annually on January 19—the birthday of Robert E. Lee—since its approval on January 30, 1931. Monday, January 18, 2016 Bohm’s Theory David Bohm "But you don't decide what to do with your life. Thought runs you. Thought, however, gives false info that you are running it, that you are the one who controls thought. Whereas actually thought is the one which controls each one of us." -- David Bohm The standard interpretation of quantum physics assumes that the quantum world is characterized by absolute indeterminism and that quantum systems exist objectively only when they are being measured or observed. David Bohm’s ontological interpretation of quantum theory rejects both these assumptions. 
Bohm's theory that quantum events are determined by subtler forces operating at deeper levels of reality ties in with John Eccles' theory that our minds exist outside the material world and interact with our brains at the quantum level. Paranormal phenomena indicate that our minds can communicate with other minds and affect distant physical systems by nonordinary means. Whether such phenomena can be adequately explained in terms of nonlocality and the quantum vacuum, or whether they involve superphysical forces and states of matter as yet unknown to science, is still an open question.

Quantum theory is generally regarded as one of the most successful scientific theories ever formulated. But while the mathematical description of the quantum world allows the probabilities of experimental results to be calculated with a high degree of accuracy, there is no consensus on what it means in conceptual terms.

According to the uncertainty principle, the position and momentum of a subatomic particle cannot be measured simultaneously with an accuracy greater than that set by Planck's constant. This is because in any measurement the particle must interact with at least one photon, or quantum of energy, which acts both like a particle and like a wave, and disturbs the particle in an unpredictable and uncontrollable manner. An accurate measurement of the position of an orbiting electron by means of a microscope, for example, requires the use of light of short wavelengths, with the result that a large but unpredictable momentum is transferred to the electron. An accurate measurement of the electron's momentum, on the other hand, requires light quanta of very low momentum (and therefore long wavelength), which leads to a large angle of diffraction in the lens and a poor definition of the position.

According to the conventional interpretation of quantum physics, however, not only is it impossible for us to measure a particle's position and momentum simultaneously with equal precision; a particle does not even possess well-defined properties when it is not interacting with a measuring instrument. Furthermore, the uncertainty principle implies that a particle can never be at rest, but is subject to constant fluctuations even when no measurement is taking place, and these fluctuations are assumed to have no causes at all. In other words, the quantum world is believed to be characterized by absolute indeterminism, intrinsic ambiguity, and irreducible lawlessness. As the late physicist David Bohm (1984, p. 87) put it: "it is assumed that in any particular experiment, the precise result that will be obtained is completely arbitrary in the sense that it has no relationship whatever to anything else that exists in the world or that ever has existed."

Bohm (ibid., p. 95) took the view that the abandonment of causality had been too hasty: "it is quite possible that while the quantum theory, and with it the indeterminacy principle, are valid to a very high degree of approximation in a certain domain, they both cease to have relevance in new domains below that in which the current theory is applicable. Thus, the conclusion that there is no deeper level of causally determined motion is just a piece of circular reasoning, since it will follow only if we assume beforehand that no such level exists." Most physicists, however, are content to accept the assumption of absolute chance.
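To put rough numbers on the claim above that a particle can never be at rest, here is a back-of-the-envelope check of the uncertainty relation (my own illustration, not part of the original essay; the 0.1 nm confinement scale, roughly one atomic diameter, is an assumed example value):

    # Minimum momentum and velocity spread of an electron confined to atomic scale,
    # from dx * dp >= hbar / 2.
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    m_e = 9.1093837015e-31   # electron mass, kg

    dx = 1.0e-10                 # position uncertainty ~ one atomic diameter, m
    dp_min = hbar / (2 * dx)     # minimum momentum uncertainty, kg*m/s
    dv_min = dp_min / m_e        # corresponding velocity spread, m/s

    print(f"dp >= {dp_min:.2e} kg*m/s, dv >= {dv_min:.2e} m/s")
    # dv comes out near 6e5 m/s: an electron pinned down to an atom is nowhere
    # near at rest, which is the ceaseless fluctuation described in the text.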
Collapsing the wave function

A quantum system is represented mathematically by a wave function, which is derived from Schrödinger's equation. The wave function can be used to calculate the probability of finding a particle at any particular point in space. When a measurement is made, the particle is of course found in only one place, but if the wave function is assumed to provide a complete and literal description of the state of a quantum system - as it is in the conventional interpretation - it would mean that in between measurements the particle dissolves into a "superposition of probability waves" and is potentially present in many different places at once. Then, when the next measurement is made, this wave packet is supposed to instantaneously "collapse," in some random and mysterious manner, into a localized particle again. This sudden and discontinuous "collapse" violates the Schrödinger equation, and is not further explained in the conventional interpretation.

Since the measuring device that is supposed to collapse a particle's wave function is itself made up of subatomic particles, it seems that its own wave function would have to be collapsed by another measuring device (which might be the eye and brain of a human observer), which would in turn need to be collapsed by a further measuring device, and so on, leading to an infinite regress. In fact, the standard interpretation of quantum theory implies that all the macroscopic objects we see around us exist in an objective, unambiguous state only when they are being measured or observed. Schrödinger devised a famous thought-experiment to expose the absurd implications of this interpretation. It goes something like this: a cat is placed in a box containing a radioactive substance, so that there is a fifty-fifty chance of an atom decaying in one hour. If an atom decays, it triggers the release of a poison gas, which kills the cat. After one hour the cat is supposedly both dead and alive (and everything in between) until someone opens the box and instantly collapses its wave function into a dead or alive cat.

Various solutions to the "measurement problem" associated with wave-function collapse have been proposed. Some physicists maintain that the classical or macro-world does not suffer from quantum ambiguity because it can store information and is subject to an "arrow of time," whereas the quantum or micro-world is alleged to be unable to store information and to be time-reversible (Pagels, 1983). A more extravagant approach is the many-worlds hypothesis, which claims that the universe splits each time a measurement (or measurement-like interaction) takes place, so that all the possibilities represented by the wave function (e.g., a dead cat and a living cat) exist objectively, but in different universes. Our own consciousness, too, is supposed to be constantly splitting into different selves, which inhabit these proliferating, non-communicating worlds.

Other theorists speculate that it is consciousness that collapses the wave function and thereby creates reality. In this view, a subatomic particle does not assume definite properties when it interacts with a measuring device, but only when the reading of the measuring device is registered in the mind of an observer (which may of course be long after the measurement has taken place). According to the most extreme, anthropocentric version of this theory, only self-conscious beings such as ourselves can collapse wave functions.
This means that the whole universe must have existed originally as "potentia" in some transcendental realm of quantum probabilities until self-conscious beings evolved and collapsed themselves and the rest of their branch of reality into the material world, and that objects remain in a state of actuality only so long as they are being observed by humans (Goswami, 1993). Other theorists, however, believe that non-self-conscious entities, including cats and possibly even electrons, may be able to collapse their own wave functions (Herbert, 1993).

The theory of wave-function collapse (or state-vector collapse, as it is sometimes called) raises the question of how the "probability waves" that the wave function is thought to represent can collapse into a particle if they are no more than abstract mathematical constructs. Since the very idea of wave packets spreading out and collapsing is not based on hard experimental evidence but only on a particular interpretation of the wave equation, it is worth taking a look at one of the main alternative interpretations, that of David Bohm and his associates, which provides an intelligible account of what may be taking place at the quantum level.

The implicate order

Bohm's ontological interpretation of quantum physics rejects the assumption that the wave function gives the most complete description of reality possible, and thereby avoids the need to introduce the ill-defined and unsatisfactory notion of wave-function collapse (and all the paradoxes that go with it). Instead, it assumes the real existence of particles and fields: particles have a complex inner structure and are always accompanied by a quantum wave field; they are acted upon not only by classical electromagnetic forces but also by a subtler force, the quantum potential, determined by their quantum field, which obeys Schrödinger's equation (Bohm & Hiley, 1993; Bohm & Peat, 1989; Hiley & Peat, 1991).

The quantum potential carries information from the whole environment and provides direct, nonlocal connections between quantum systems. It guides particles in the same way that radio waves guide a ship on automatic pilot - not by its intensity but by its form. It is extremely sensitive and complex, so that particle trajectories appear chaotic. It corresponds to what Bohm calls the implicate order, which can be thought of as a vast ocean of energy on which the physical, or explicate, world is just a ripple. Bohm points out that the existence of an energy pool of this kind is recognized, but given little consideration, by standard quantum theory, which postulates a universal quantum field - the quantum vacuum or zero-point field - underlying the material world. Very little is known about the quantum vacuum at present, but its energy density is estimated to be an astronomical 10^108 J/cm³ (Forward, 1996, pp. 328-37).

In his treatment of quantum field theory, Bohm proposes that the quantum field (the implicate order) is subject to the formative and organizing influence of a superquantum potential, which expresses the activity of a superimplicate order. The superquantum potential causes waves to converge and diverge again and again, producing a kind of average particle-like behavior. The apparently separate forms that we see around us are therefore only relatively stable and independent patterns, generated and sustained by a ceaseless underlying movement of enfoldment and unfoldment, with particles constantly dissolving into the implicate order and then recrystallizing.
This process takes place incessantly, and with incredible rapidity, and is not dependent upon a measurement being made. In Bohm's model, then, the quantum world exists even when it is not being observed and measured. He rejects the positivist view that something that cannot be measured or known precisely cannot be said to exist. In other words, he does not confuse epistemology with ontology, the map with the territory. For Bohm, the probabilities calculated from the wave function indicate the chances of a particle being at different positions regardless of whether a measurement is made, whereas in the conventional interpretation they indicate the chances of a particle coming into existence at different positions when a measurement is made. The universe is constantly defining itself through its ceaseless interactions - of which measurement is only a particular instance - and absurd situations such as dead-and-alive cats therefore cannot arise.

Thus, although Bohm rejects the view that human consciousness brings quantum systems into existence, and does not believe that our minds normally have a significant effect on the outcome of a measurement (except in the sense that we choose the experimental setup), his interpretation opens the way for the operation of deeper, subtler, more mindlike levels of reality. He argues that consciousness is rooted deep in the implicate order, and is therefore present to some degree in all material forms. He suggests that there may be an infinite series of implicate orders, each having both a matter aspect and a consciousness aspect: "everything material is also mental and everything mental is also material, but there are many more infinitely subtle levels of matter than we are aware of" (Weber, 1990, p. 151). The concept of the implicate domain could be seen as an extended form of materialism, but, he says, "it could equally well be called idealism, spirit, consciousness. The separation of the two - matter and spirit - is an abstraction. The ground is always one." (Weber, 1990, p. 101)

Mind and free will

Quantum indeterminism is clearly open to interpretation: it either means hidden (to us) causes, or a complete absence of causes. The position that some events "just happen" for no reason at all is impossible to prove, for our inability to identify a cause does not necessarily mean that there is no cause. The notion of absolute chance implies that quantum systems can act absolutely spontaneously, totally isolated from, and uninfluenced by, anything else in the universe. The opposing standpoint is that all systems are continuously participating in an intricate network of causal interactions and interconnections at many different levels. Individual quantum systems certainly behave unpredictably, but if they were not subject to any causal factors whatsoever, it would be difficult to understand why their collective behavior displays statistical regularities.

The position that everything has a cause, or rather many causes, does not necessarily imply that all events, including our own acts and choices, are rigidly predetermined by purely physical processes - a standpoint sometimes called "hard determinism" (Thornton, 1989). The indeterminism at the quantum level provides an opening for creativity and free will. But if this indeterminism is interpreted to mean absolute chance, it would mean that our choices and actions just "pop up" in a totally random and arbitrary way, in which case they could hardly be said to be our choices and the expression of our own free will.
Alternatively, quantum indeterminism could be interpreted as causation from subtler, nonphysical levels, so that our acts of free will are caused - but by our own self-conscious minds. From this point of view - sometimes called "soft determinism" - free will involves active, self-conscious self-determination.

According to orthodox scientific materialism, mental states are identical with brain states; our thoughts and feelings, and our sense of self, are generated by electrochemical activity in the brain. This would mean either that one part of the brain activates another part, which then activates another part, etc., or that a particular region of the brain is activated spontaneously, without any cause, and it is hard to see how either alternative would provide a basis for a conscious self and free will. Francis Crick (1994), for example, who believes that consciousness is basically a pack of neurons, says that the main seat of free will is probably in or near a part of the cerebral cortex known as the anterior cingulate sulcus, but he implies that our feeling of being free is largely, if not entirely, an illusion.

Those who reduce consciousness to a by-product of the brain disagree on the relevance of the quantum-mechanical aspects of neural networks: for example, Francis Crick, the late Roger Sperry (1994), and Daniel Dennett (1991) tend to ignore quantum physics, while Stuart Hameroff (1994) believes that consciousness arises from (as yet undiscovered) quantum coherence in microtubules within the brain's neurons. Some researchers see a connection between consciousness and the quantum vacuum. For example:

- Charles Laughlin (1996) argues that the neural structures that mediate consciousness may interact nonlocally with the vacuum (or quantum sea).
- Edgar Mitchell (1996) believes that both matter and consciousness arise out of the energy potential of the vacuum.

Neuroscientist Sir John Eccles dismisses the materialistic standpoint as a "superstition" and advocates dualist interactionism: he argues that there is a mental world in addition to the material world, and that our mind or self acts on the brain (particularly the supplementary motor area of the neocortex) at the quantum level by increasing the probability of the firing of selected neurons (Eccles, 1994; Giroldini, 1991). He argues that the mind is not only nonphysical but absolutely nonmaterial and nonsubstantial. However, if it were not associated with any form of energy-substance whatsoever, it would be a pure abstraction and therefore unable to exert any influence on the physical world. This objection also applies to antireductionists who shun the word "dualist" and describe matter and consciousness as complementary or dyadic aspects of reality, yet deny consciousness any energetic or substantial nature, thereby implying that it is fundamentally different from matter and in fact a mere abstraction.

An alternative position is that which is echoed in many mystical and spiritual traditions: that physical matter is just one "octave" in an infinite spectrum of matter-energy, or consciousness-substance, and that just as the physical world is largely organized and coordinated by inner worlds (astral, mental, and spiritual), so the physical body is largely energized and controlled by subtler bodies or energy-fields, including an astral model-body and a mind or soul (see Purucker, 1973).
According to this view, nature in general, and all the entities that compose it, are formed and organized mainly from within outwards, from deeper levels of their constitution. This inner guidance is sometimes automatic and passive, giving rise to our automatic bodily functions and habitual and instinctual behavior, and to the regular, lawlike operations of nature in general, and sometimes it is active and self-conscious, as in our acts of intention and volition. A physical system subjected to such subtler influences is not so much acted upon from without as guided from within. As well as influencing our own brains and bodies, our minds also appear to be able to affect other minds and bodies and other physical objects at a distance, as seen in paranormal phenomena.

It was David Bohm and one of his supporters, John Bell of CERN, who laid most of the theoretical groundwork for the EPR experiments performed by Alain Aspect in 1982 (the original thought-experiment was proposed by Einstein, Podolsky, and Rosen in 1935). These experiments demonstrated that if two quantum systems interact and then move apart, their behavior is correlated in a way that cannot be explained in terms of signals traveling between them at or slower than the speed of light. This phenomenon is known as nonlocality, and it is open to two main interpretations: either it involves unmediated, instantaneous action at a distance, or it involves faster-than-light signaling.

If nonlocal correlations are literally instantaneous, they would effectively be noncausal; if two events occur absolutely simultaneously, "cause" and "effect" would be indistinguishable, and one of the events could not be said to cause the other through the transfer of force or energy, for no such transfer could take place infinitely fast. There would therefore be no causal transmission mechanism to be explained, and any investigations would be confined to the conditions that allow correlated events to occur at different places. It is interesting to note that light and other electromagnetic effects were also once thought to be transmitted instantaneously, until observational evidence proved otherwise.

The hypothesis that nonlocal connections are absolutely instantaneous is impossible to verify, as it would require two perfectly simultaneous measurements, which would demand an infinite degree of accuracy. However, as David Bohm and Basil Hiley (1993, pp. 293-4, 347) have pointed out, it could be experimentally falsified. For if nonlocal connections are propagated not at infinite speeds but at speeds greater than that of light through a "quantum ether" - a subquantum domain where current quantum theory and relativity theory break down - then the correlations predicted by quantum theory would vanish if measurements were made in periods shorter than those required for the transmission of quantum connections between particles. Such experiments are beyond the capabilities of present technology but might be possible in the future. If superluminal interactions exist, they would be "nonlocal" only in the sense of nonphysical.

Nonlocality has been invoked as an explanation for telepathy and clairvoyance, though some investigators believe that they might involve a deeper level of nonlocality, or what Bohm calls "super-nonlocality" (similar perhaps to Sheldrake's "morphic resonance" (1989)).
As already pointed out, if nonlocality is interpreted to mean instantaneous connectedness, it would imply that information could be "received" at a distance at exactly the same moment as it is generated, without undergoing any form of transmission. At most, one could then try to understand the conditions that allow the instant appearance of information. The alternative position is that information - which is basically a pattern of energy - always takes time to travel from its source to another location, that information is stored at some paraphysical level, and that we can access this information, or exchange information with other minds, if the necessary conditions of "sympathetic resonance" exist. As with EPR, the hypothesis that telepathy is absolutely instantaneous is unprovable, but it might be possible to devise experiments that could falsify it. For if ESP phenomena do involve subtler forms of energy traveling at finite but perhaps superluminal speeds through superphysical realms, it might be possible to detect a delay between transmission and reception, and also some weakening of the effect over very long distances, though it is already evident that any attenuation must be far less than that experienced by electromagnetic energy, which is subject to the inverse-square law.

As for precognition, the third main category of ESP, one possible explanation is that it involves direct, "nonlocal" access to the actual future. Alternatively, it may involve clairvoyant perception of a probable future scenario that is beginning to take shape on the basis of current tendencies and intentions, in accordance with the traditional idea that coming events cast their shadows before them. Bohm says that such foreshadowing takes place "deep in the implicate order" (Talbot, 1992, p. 212) - which some mystical traditions would call the astral or akashic realms.

Psychokinesis and the unseen world

Micro-psychokinesis involves the influence of consciousness on atomic particles. In certain micro-PK experiments conducted by Helmut Schmidt, groups of subjects were typically able to alter the probabilities of quantum events from 50% to between 51 and 52%, and a few individuals managed over 54% (Broughton, 1991, p. 177). Experiments at the PEAR lab at Princeton University have yielded a smaller shift of 1 part in 10,000 (Jahn & Dunne, 1987). Some researchers have invoked the theory of the collapse of wave functions by consciousness in order to explain such effects. It is argued that in micro-PK, in contrast to ordinary perception, the observing subject helps to specify what the outcome of the collapse of the wave function will be, perhaps by some sort of informational process (Broughton, 1991, pp. 177-81). Eccles follows a similar approach in explaining how our minds act on our own brains. However, the concept of wave-function collapse is not essential to explaining mind-matter interaction. We could equally well adopt the standpoint that subatomic particles are ceaselessly flickering into and out of physical existence, and that the outcome of the process is modifiable by our will - a psychic force.

Macro-PK involves the movement of stable, normally unmoving objects by mental effort. Related phenomena include poltergeist activity, materializations and dematerializations, teleportation, and levitation.
Although an impressive amount of evidence for such phenomena has been gathered by investigators over the past one hundred and fifty years (Inglis, 1984, 1992; Milton, 1994), macro-PK is a taboo area and attracts little interest, despite - or perhaps because of - its potential to overthrow the current materialistic paradigm and revolutionize science. Such phenomena clearly involve far more than altering the probabilistic behavior of atomic particles, and could be regarded as evidence for forces, states of matter, and nonphysical living entities currently unknown to science. Confirmation that such things exist would provide a further indication that within the all-embracing unity of nature there is endless diversity.

The possible existence of subtler planes interpenetrating the physical plane is at any rate open to investigation (see Tiller, 1993), and this is more than can be said for the hypothetical extra dimensions postulated by superstring theory, which are said to be curled up in an area a billion-trillion-trillionth of a centimeter across and therefore completely inaccessible, or the hypothetical "baby universes" and "bubble universes" postulated by some cosmologists, which are said to exist in some equally inaccessible "dimension."

The hypothesis of superphysical realms does not seem to be favored by many researchers. Edgar Mitchell (1996), for example, believes that all psychic phenomena involve nonlocal resonance between the brain and the quantum vacuum, and consequent access to holographic, nonlocal information. In his view, this hypothesis could explain not only PK and ESP, but also out-of-body and near-death experiences, visions and apparitions, and evidence usually cited in favor of a reincarnating soul. He admits that this theory is speculative, unvalidated, and may require new physics. Such investigations could deepen our knowledge of the workings of both the quantum realm and our minds, and the relationship between them, and indicate whether the quantum vacuum really is the bottom level of all existence, or whether there are deeper realms of nature waiting to be explored.

Saturday, January 16, 2016

Hypengophobia is the hatred of having responsibilities. Having hypengophobia may lead to becoming a slawterpooch. Slawterpooch? Glad you asked. A slawterpooch is a lazy or ungainly person.

Thursday, January 14, 2016

"Most people need love and acceptance a lot more than they need advice." — Bob Goff

Myth Of Authority

Are you domesticated? The most dangerous superstition is the myth of authority. You see, there's no such thing as a legitimate ruling class. The key word here is legitimate. The ruling class rules not by agreement but by force. Furthermore, humanity wasn't meant to be a domesticated species owned by a ruling class. Those who vote VOLUNTARILY put into power those who rob and brutalize millions of their own kind and then wage war on the other side of the world in order to kill people for profit! The truth is, belief in government literally takes decent people and converts their energy and their production into power for the nastiest people in the world, who then go about murdering and robbing others by the millions! We have allowed ourselves to become enslaved because we don't understand that the game is a gigantic lie.

More state propaganda.

State educational systems are based upon the ability to control people's minds and limit their ability to think critically.
Even those with doctorates and advanced degrees are like highly intelligent robots who are only able to think within a confined set of options. Popular TV shows like The Daily Show and The Colbert Report are part of the repetitive propaganda distribution system. They try to make the news funny, but it is still the propaganda dogma that they constantly shove down our throats!

Understand what government really is beneath the rhetoric and the propaganda we are taught in school. They don't really care about "their" laws or about actual justice, and they put on a facade of due process, but underneath it is simply brute-force domination. Make no mistake about it, the egomaniacs in charge will lie, cheat, and steal to keep their human livestock enslaved. Who are these people? Go to a city council meeting to see them. Or, perhaps, Commissioner's Court. Check out Congress, state and federal -- you'll find them there, male and female, black, brown, yellow, and white, dressed to the nines with beady eyes and chemically whitened teeth. We give these people permission to be our masters. We willfully and obediently trust these psychos to make decisions concerning our health and welfare as they line their pockets and fix the game.

And so we ask: what can be done to regain our freedom? It's simple enough, really. Stop believing what they say. Given that 1) anything can now be faked -- images, audio, even video -- and 2) people in positions of power will say and do absolutely anything that will maintain or increase their power... why does anyone ever pay any attention to what any politician ever says? Everything they say, and everything they do, is to benefit themselves, almost always at your expense. If you know someone is an opportunistic, sociopathic, pathological liar, why would you ever bother listening to anything he (or she, sorry Mrs. Pelosi) says? To watch CNN, to listen to the Emperor (or President, or whatever bogus label he wears), or to watch political "debates" is a complete waste of time, unless you're just studying how liars and thieves function. And given that the mainstream media consists almost entirely of the people who kiss the asses of political parasites, why would you ever believe anything they say either? Why pay any attention to any of them?

Don't let the words of psychopaths and manipulators intrude on your life and your peace of mind -- unless you like worrying, being scared, feeling insecure, or being told about a thousand things that might kill you, almost all of them imaginary, and the rest exaggerated. The only thing we have to fear is... the people who claim the right to rule us. And since those people own the media, the nightly news is not going to warn you about them.

Tuesday, January 12, 2016

Answer Questions Like A Presidential Candidate

It's early, I know, but have you decided who you're going to vote for in the next presidential election? Does it matter? Anyway, from now on, I think I'm going to answer questions like a presidential candidate. It's kind of fun...

"TommyBoy, what are you going to do this weekend?"

"That's a great question and an important one. And I WILL do something this weekend. But let me take a step back and answer a broader question. What are we ALL doing this weekend? As a nation? As a world? This weekend I will do something comprehensive and robust, yet fun. As Americans, we all should. We owe it not only to ourselves but to the Founding Fathers who had a vision about what should be done."

"But what are you going to do?"

"I'm really glad you asked.
What I'm going to do involves three things. First, it's going to be relaxing. Second, it's going to be enjoyable. And finally, I'm going to make sure it is cost-effective so I don't get into a deficit. Four weeks ago, I said I was going to do something -- and I did. This weekend will be no different. Thank you."
I am basically a Computer Programmer, but Physics has always fascinated and often baffled me. I have tried to understand probability density in Quantum Mechanics for many, many years. What I understood is that probability amplitude is the square root of the probability of finding an electron around a nucleus. But the square root of probability does not mean anything in the physical sense. Can anyone please explain the physical significance of probability amplitude in Quantum Mechanics? I read the Wikipedia article on probability amplitude many times over. What are those dumbbell-shaped images representing?

6 Answers

In quantum mechanics, the amplitude $\psi$, and not the probability $|\psi|^2$, is the quantity which admits the superposition principle. Notice that the dynamics of the physical system (Schrödinger equation) is formulated in terms of this object and is linear in its evolution. Observe that working with superpositions of $\psi$ also permits complex phases $e^{i\theta}$ to play a role. In the same spirit, the overlap of two systems is computed by investigating the overlap of the amplitudes.

All you say is factually correct, but since the question asked for an explanation in layman's terms I think there needs to be more explanation. – user9886 Mar 21 '13 at 16:21
@user9886: The integrals involving position operators are layman's terms? – NikolajK Mar 21 '13 at 18:11
What is the benefit in using complex phases rather than just sine and cosine? – wrongusername Feb 27 at 2:57

Before trying to understand quantum mechanics proper, I think it's helpful to try to understand the general idea of its statistics and probability. There are basically two kinds of mathematical systems that can yield a nontrivial formalism for probability. One is the kind we're familiar with from everyday life: each outcome has a probability, and those probabilities directly add up to 100%. A coin has two sides, each with 50% probability. $50\% + 50\% = 100\%$, so there you go.

But there's another system of probability, very different from what you and I are used to. It's a system where each event has an associated vector (or complex number), and the sum of the squared magnitudes of those vectors (complex numbers) is 1. Quantum mechanics works according to this latter system, and for this reason, the complex numbers associated with events are what we often deal with. The wavefunction of a particle is just the distribution of these complex numbers over space. We have chosen to call these numbers the "probability amplitudes" merely as a matter of convenience.

The system of probability that QM follows is very different from what everyday experience would lead us to expect, and this has many mathematical consequences. It makes interference effects possible, for example, and such effects are only explainable directly with amplitudes. For this reason, amplitudes are physically significant--they are significant because the mathematical model for probability on the quantum scale is not what you and I are accustomed to.

Edit: regarding "just extra stuff under the hood." Here's a more concrete way of talking about the difference between classical and quantum probability. Let $A$ and $B$ be mutually exclusive events. In classical probability, they would have associated probabilities $p_A$ and $p_B$, and the total probability of them occurring is obtained through addition, $p_{A \cup B} = p_A + p_B$.
In quantum probability, their amplitudes add instead. This is a key difference. There is a total amplitude $\psi_{A \cup B} = \psi_A + \psi_B$, and the squared magnitude of this amplitude--that is, the probability--is as follows:

$$p_{A \cup B} = |\psi_A + \psi_B|^2 = p_A + p_B + (\psi_A^* \psi_B + \psi_A \psi_B^*)$$

There is an extra term, yielding physically different behavior. This quantifies the effects of interference, and for the right choices of $\psi_A$ and $\psi_B$, you could end up with two events that have nonzero individual probabilities, but the probability of the union is zero! Or higher than the sum of the individual probabilities.

I'm not too happy with the formulation of "mathematical systems that can yield a nontrivial formalism for probability." Firstly, because it sounds like you imply that there are only these two "systems", and secondly, because the quantum framework is still one where "each outcome has a probability, and those probabilities directly add up to 100%." It's just extra dynamics under the hood. – NikolajK Mar 21 '13 at 16:13
There are only these two systems. It is mathematically proven that you couldn't have, say, an amplitude that must be raised to the 4th power. There is only classical probability as we know it and the quantum kind. It's not just extra stuff under the hood, either. See my edit. – Muphrid Mar 21 '13 at 16:28
Whatever is mathematically proven must be w.r.t. some postulates and these are not stated. Also, there are the observables whose probabilities sum to 100% (namely the probabilities to be in any of a complete set of eigenstates) and in this sense it's just probability theory with complex dynamics under the hood. I still don't think this is an inappropriate formulation. – NikolajK Mar 21 '13 at 18:23

I agree with the other answers provided. However, you may find the probability amplitudes more intuitive in the context of the Feynman path integral approach. Suppose a particle is created at the location $x_1$ at time $0$ and that you want to know the probability for observing it later at some position $x_2$ at time $t$. Every path $P$ that starts at $x_1$ at time zero and ends at $x_2$ at time $t$ is associated with a (complex) probability amplitude $A_P$. Within the path integral approach, the total amplitude for the process initially described is given by the sum of all these amplitudes:

$$A_{\textrm{total}} = \sum_P A_P$$

i.e., the sum over all possible paths the particle could take between $x_1$ and $x_2$. These paths interfere coherently, and the probability for observing the particle at $x_2$ at time $t$ is given by the square of the total amplitude:

$$P(x_2, t) = |A_{\textrm{total}}|^2 = \Big|\sum_P A_P\Big|^2$$

I should note that the Feynman path integral formalism (described above) is actually a special case of a more general approach wherein the amplitudes are associated with processes rather than paths. Also, a good reference for this is volume 3 of The Feynman Lectures.
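The interference arithmetic above is easy to check numerically. Below is a minimal sketch in plain Python; the two amplitude values and their phases are arbitrary choices for illustration, not taken from any particular physical system:

```python
import cmath

# Two complex amplitudes for mutually exclusive outcomes A and B.
# Magnitudes and phases are arbitrary illustration values.
psi_a = 0.5 + 0.0j
psi_b = 0.5 * cmath.exp(1j * cmath.pi)   # same magnitude, opposite phase

p_a = abs(psi_a) ** 2                    # 0.25
p_b = abs(psi_b) ** 2                    # 0.25

# Classical rule: probabilities of exclusive events simply add.
print(p_a + p_b)                         # 0.5

# Quantum rule: add amplitudes first, then take the squared magnitude.
print(abs(psi_a + psi_b) ** 2)           # ~0.0, complete destructive interference

# The gap between the two is exactly the cross term psi_a* psi_b + psi_a psi_b*.
cross = (psi_a.conjugate() * psi_b + psi_a * psi_b.conjugate()).real
print(p_a + p_b + cross)                 # reproduces the quantum result
```

With opposite phases the union has probability zero even though each outcome alone has probability 0.25; with equal phases the same code gives 1.0, twice the classical sum, which is the constructive case.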
In quantum mechanics a particle is described by its wave-function $\psi$ (in spatial representation it would for example be $\psi(x,t)$, but I omit the arguments in the following). Observables, like the position $x$, are represented by operators $\hat x$. The mean value of the position of a particle is calculated as

$$\int \mathrm{d}x\, \tilde\psi\, \hat x\, \psi.$$

Since $\hat x$ applied to $\psi(x,t)$ just gives the position $x$ times $\psi(x,t)$, we can write the integral as

$$\int \mathrm{d}x\, x\, \tilde\psi \psi.$$

$\tilde\psi$ is the complex conjugate of $\psi$, and therefore $\tilde\psi \psi = |\psi|^2$. And finally, since a mean value is usually computed as an integral over the variable times a probability distribution $\rho$, as

$$\langle X \rangle_\rho = \int \mathrm{d}X\, X\, \rho(X),$$

$|\psi|^2$ can be interpreted as a probability density of finding the particle at some point. E.g., the probability of it being between $a$ and $b$ is

$$\int_a^b \mathrm{d}x\, |\psi|^2.$$

So the wave function (which is the solution to the Schrödinger equation that describes the system in question) is a probability amplitude in the sense of the first sentence of the article you linked. Lastly, the dumbbell shows the area in space where $|\psi|^2$ is larger than some very small number, so basically the regions where it is not unlikely to find the electron.

Have a look at this simplified statement describing the behavior of a particle in a potential problem: In quantum mechanics, a probability amplitude is a complex number whose modulus squared represents a probability or probability density. This complex number comes from a solution of a quantum mechanical equation with the boundary conditions of the problem, usually a Schrödinger equation, whose solutions are the "wavefunctions" $\psi(x)$, where $x$ represents the coordinates generically for this argument. The values taken by a normalized wave function $\psi$ at each point $x$ are probability amplitudes, since $|\psi(x)|^2$ gives the probability density at position $x$. To get from the complex numbers to a probability distribution, the probability of finding the particle, we have to take the complex square of the wavefunction, $\psi^*\psi$. So the "probability amplitude" is an alternate definition/identification of "wavefunction", coming after the fact, when it was found experimentally that $\psi^*\psi$ gives a probability density distribution for the particle in question. First one computes $\psi$ and then one can evaluate the probability density $\psi^*\psi$, not the other way around. The significance of $\psi$ is that it is the result of a computation. I agree it is confusing for non-physicists who know probabilities from statistics.

Part of your problem is "Probability amplitude is the square root of the probability [...]" The amplitude is a complex number whose squared modulus is the probability. That is, $\psi^* \psi = P$, where the asterisk superscript means the complex conjugate.[1] It may seem a little pedantic to make this distinction because so far the "complex phase" of the amplitudes has no effect on the observables at all: we could always rotate any given amplitude onto the positive real line and then "the square root" would be fine. But we can't guarantee to be able to rotate more than one amplitude that way at the same time. Moreover, there are two ways to combine amplitudes to find probabilities for observation of combined events.

• When the final states are distinguishable you add probabilities: $P_{dis} = P_1 + P_2 = \psi_1^* \psi_1 + \psi_2^* \psi_2$.

• When the final states are indistinguishable,[2] you add amplitudes: $\Psi_{1,2} = \psi_1 + \psi_2$, and $P_{ind} = \Psi_{1,2}^*\Psi_{1,2} = \psi_1^*\psi_1 + \psi_1^*\psi_2 + \psi_2^*\psi_1 + \psi_2^*\psi_2$.
The terms that mix the amplitudes labeled 1 and 2 are the "interference terms". The interference terms are why we can't ignore the complex nature of the amplitudes, and they cause many kinds of quantum weirdness.

[1] Here I'm using a notation reminiscent of a Schrödinger-like formulation, but that interpretation is not required. Just accept $\psi$ as a complex number representing the amplitude for some observation.

[2] This is not precise; the states need to be "coherent", but you don't want to hear about that today.
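As a numerical companion to the $\int_a^b \mathrm{d}x\, |\psi|^2$ formula in the answers above, here is a short Python sketch that integrates a probability density on a grid. The infinite-square-well ground state is assumed purely for convenience, because it is easy to write down and normalize; any normalized wavefunction would work the same way:

```python
import numpy as np

# Ground state of an infinite square well of width L:
# psi(x) = sqrt(2/L) * sin(pi * x / L), a standard textbook wavefunction.
L = 1.0
x = np.linspace(0.0, L, 10_001)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

density = np.abs(psi) ** 2                   # |psi(x)|^2, the probability density

# A normalized state integrates to 1 over the whole box.
print(np.trapz(density, x))                  # ~1.0

# Probability of finding the particle between a and b.
a, b = 0.25 * L, 0.75 * L
inside = (x >= a) & (x <= b)
print(np.trapz(density[inside], x[inside]))  # ~0.818
```

The analytic value for the middle half of the box is $1/2 + 1/\pi \approx 0.818$, so the grid result can be checked by hand. The dumbbell pictures asked about in the question are the three-dimensional analogue: surfaces enclosing the regions where $|\psi|^2$ is appreciable.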
Thursday, December 8, 2011

Hydrogen is the chemical element with atomic number 1. It is represented by the symbol H. With an average atomic weight of 1.00794 u (1.007825 u for hydrogen-1), hydrogen is the lightest and most abundant chemical element, constituting roughly 75% of the Universe's chemical elemental mass. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Naturally occurring elemental hydrogen is relatively rare on Earth.

The most common isotope of hydrogen is protium (name rarely used, symbol 1H) with a single proton and no neutrons. In ionic compounds it can take a negative charge (an anion known as a hydride and written as H−), or appear as a positively charged species H+. The latter cation is written as though composed of a bare proton, but in reality, hydrogen cations in ionic compounds always occur as more complex species. Hydrogen forms compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, with many reactions exchanging protons between soluble molecules. As the simplest atom known, the hydrogen atom has been of theoretical use. For example, as the only neutral atom with an analytic solution to the Schrödinger equation, the study of the energetics and bonding of the hydrogen atom played a key role in the development of quantum mechanics.

Hydrogen gas (now known to be H2) was first artificially produced in the early 16th century, via the mixing of metals with strong acids. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance, and that it produces water when burned, a property which later gave it its name, which in Greek means "water-former."

At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, non-toxic, highly combustible diatomic gas with the molecular formula H2. Industrial production is mainly from the steam reforming of natural gas, and less often from more energy-intensive hydrogen production methods like the electrolysis of water. Most hydrogen is employed near its production site, with the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals, complicating the design of pipelines and storage tanks.

Hydrogen gas (dihydrogen or molecular hydrogen) is highly flammable and will burn in air at a very wide range of concentrations, between 4% and 75% by volume. The enthalpy of combustion for hydrogen is −286 kJ/mol:

2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)

Hydrogen gas forms explosive mixtures with air if it is 4–74% concentrated, and with chlorine if it is 5–95% concentrated. The mixtures spontaneously explode by spark, heat or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F). Pure hydrogen-oxygen flames emit ultraviolet light and are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine compared to the highly visible plume of a Space Shuttle Solid Rocket Booster. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. The destruction of the Hindenburg airship was an infamous example of hydrogen combustion; the cause is debated, but the visible flames were the result of combustible materials in the ship's skin.
Because hydrogen is buoyant in air, hydrogen flames tend to ascend rapidly and cause less damage than hydrocarbon fires. Two-thirds of the Hindenburg passengers survived the fire, and many deaths were instead the result of falls or burning diesel fuel. H2 reacts with every oxidizing element. Hydrogen can react spontaneously and violently at room temperature with chlorine and fluorine to form the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride, which are also potentially dangerous acids.
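For a sense of scale, the 286 kJ/mol figure quoted above converts directly into an energy per unit mass. A quick back-of-the-envelope sketch in Python (taking the molar mass of H2 as 2.016 g/mol):

```python
# Specific energy of hydrogen combustion, from the molar enthalpy
# quoted above: 286 kJ released per mole of H2 (liquid-water product).
delta_h_kj_per_mol = 286.0
molar_mass_g_per_mol = 2.016      # molar mass of H2

kj_per_gram = delta_h_kj_per_mol / molar_mass_g_per_mol
print(round(kj_per_gram, 1))      # ~141.9 kJ/g, i.e. roughly 142 MJ/kg
```

That is roughly three times the specific energy of gasoline by mass, which is part of why hydrogen is attractive as a fuel despite its low density and the handling hazards described above.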
Measuring angular momentum in atomic systems

#1, Oct 30, 2008:
Perhaps rather an elementary question, but I can't find a clear answer in my textbooks: Would I be right in thinking that we never actually measure directly the angular momentum of atomic systems, but rather, using the results of QM calculations about structure, and knowledge of the selection rules, we *infer* it from spectral transitions?

#2, Oct 30, 2008 (Staff: Mentor):
What would you consider to be a "direct measurement" of angular momentum? The Einstein-de Haas effect demonstrates that the microscopic angular momentum of electrons in a metal contributes to the object's total macroscopic angular momentum. Briefly (and probably oversimplified), you start with an object that's not rotating, then flip the spins of the electrons, and observe that the object starts to rotate macroscopically in order to maintain the same total angular momentum.

#3, Oct 30, 2008:
You've identified the origin of my question, as I could not think of how an experiment could observe angular momentum in atomic systems directly. Clearly, the Einstein-de Haas effect does demonstrate a way to do this - and spin angular momentum at that (which is what I'm pursuing). However, I'm still left wondering how angular momentum is identified from atomic spectroscopy. I doubt that early work on atomic spectra relied on the Einstein-de Haas effect. Can you, or anyone else, tell me how it's conventionally done?

#4, Oct 30, 2008 (Staff Emeritus, Science Advisor):
Er.. the angular momentum of an atom is the origin of magnetism in solids! One doesn't need any "spectroscopy" studies to get that.

#5, Oct 30, 2008:
OK - let me put it a bit more precisely: I'm asking about the orbital and spin angular momenta of electrons in, for example, low-density gaseous states. Textbooks glibly mention electrons being in states $|n, l, m, s\rangle$. But how, observationally, do we come to know what $m_l$ and $m_s$ are for the states between which we observe spectral lines? Suppose I excite some sodium vapour, and as Wikipedia states: "One notable atomic spectral line of sodium vapor is the so-called D-line, which may be observed directly as the sodium flame-test line (see Applications) and also the major light output of low-pressure sodium lamps (these produce an unnatural yellow, rather than the peach-colored glow of high pressure lamps). The D-line is one of the classified Fraunhofer lines observed in the visible spectrum of the sun's electromagnetic radiation. Sodium vapor in the upper layers of the sun creates a dark line in the emitted spectrum of electromagnetic radiation by absorbing visible light in a band of wavelengths around 589.5 nm. This wavelength corresponds to transitions in atomic sodium in which the valence electron transitions from a 3p to 3s electronic state. Closer examination of the visible spectrum of atomic sodium reveals that the D-line actually consists of two lines called the D1 and D2 lines at 589.6 nm and 589.0 nm, respectively. This fine structure results from a spin-orbit interaction of the valence electron in the 3p electronic state. The spin-orbit interaction couples the spin angular momentum and orbital angular momentum of a 3p electron to form two states that are respectively notated as $3p\,^2P^o_{1/2}$ and $3p\,^2P^o_{3/2}$ in the LS coupling scheme. The 3s state of the electron gives rise to a single state which is notated as $3s\,^2S_{1/2}$ in the LS coupling scheme.
The D1-line results from an electronic transition between the $3s\,^2S_{1/2}$ lower state and the $3p\,^2P^o_{1/2}$ upper state. The D2-line results from an electronic transition between the $3s\,^2S_{1/2}$ lower state and the $3p\,^2P^o_{3/2}$ upper state. Even closer examination of the visible spectrum of atomic sodium would reveal that the D-line actually consists of a lot more than two lines. These lines are associated with hyperfine structure of the 3p upper states and 3s lower states. Many different transitions involving visible light near 589.5 nm may occur between the different upper and lower hyperfine levels.[8][9]" Now, how precisely do we come to be able to state that a transition is between any of the above two states - i.e., to identify the states' various quantum numbers, including the angular momenta? As posed in my original question - the only way I can see this being achieved is if one first *calculates* the structure of the spectrum and thus the associated $n, l, m, s$ values, and then one assigns the observed spectral lines to those theoretically identified states. So one never actually observes the $m_l$ and $m_s$ values, but as mentioned in the first post, one infers them. Or is there some other way to do this?

#6, Oct 30, 2008 (Staff: Mentor):
One way (there are probably others) to associate spin and orbital angular momentum quantum numbers of initial and final states with particular spectral lines is via the Zeeman effect. When you apply an external magnetic field, the energy levels of the different spin states shift and/or split by amounts that depend on the angular momentum quantum numbers, and on the strength of the magnetic field.

#7, Oct 30, 2008:
Yes, what you say is correct, but I think I'm failing to make explicit the point of my question. It seems to me that all the answers I've received come from hindsight, as is the case with textbooks. But how do we know *from the outset* what the quantum numbers are (which lines correspond to transitions from the lowest values), and what the units of spin and orbital angular momentum are initially, unless we have an atomic model of some sort to start with. For example: if you look at Ch. 1, Vol. 1 of P.W. Atkins' "Molecular Quantum Mechanics", he outlines how Balmer, Rydberg and Ritz worked out some regularities in spectral lines which led Bohr to propose a model for hydrogen, based on a number of assumptions, including that: "The stationary states are to be determined by the condition that the ratio of the total energy of the electron to its frequency of rotation shall be an integral multiple of $h/2$. For circular orbits this is equivalent to the restriction of the angular momentum of the electron to integral multiples of $h/2\pi$" ...and the calculation based on (all) the postulates yields the electron's energies in the hydrogen atom as

$$E_n = -\frac{\mu e^4}{8 \varepsilon_0^2 h^2 n^2}$$

($\mu$ being the reduced mass), where $n = 1, 2, 3, \ldots$ is the first quantum number. And the result agrees well with experiment (as far as early observations went). So, it appears that even at the outset, the unit of angular momentum is fed into the model, and not itself observed. The first quantum number is identified by a model, and I suspect that the *ranges* of possible values of $l$ and $m_l$ drop out of the spherical harmonics as solutions to the Schrödinger equation - and that these provide the original basis for *interpretation* of the observations, rather than direct measurement of orbital angular momentum.
And $m_s$ emerges from the doublet structure of spectral lines, but still refers to the *calculated* unit $h/2\pi$, again rather than being measured directly. I think I'm convincing myself that my original point was true, but it would be good to know if I'm wrong. Thanks for the stimulus of your contributions.

#8, Oct 30, 2008 (Staff: Mentor):
Bohr's atomic model has been superseded for over eighty years by modern quantum mechanics. The quantum numbers for orbital angular momentum arise directly from the solution of the Schrödinger equation. For spin, I think you have to go further, to the relativistic Dirac equation, and assume that the magnitude of the spin angular momentum has a certain value; but after that, the mathematics of addition of quantum-mechanical angular momentum determines everything else. (Someone with more expertise than I in atomic physics is welcome to correct me on this.)
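To make the "model first, assignment second" point of posts #7 and #8 concrete, here is a small Python sketch that evaluates the energy formula quoted above and predicts a Balmer line. The electron mass stands in for the reduced mass $\mu$, which shifts the result by only about 0.05%; the constants are standard CODATA values:

```python
# Standard physical constants, SI units.
e = 1.602176634e-19        # elementary charge, C
h = 6.62607015e-34         # Planck constant, J s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg (stand-in for the reduced mass)
c = 2.99792458e8           # speed of light, m/s

def bohr_energy(n):
    """E_n = -mu e^4 / (8 eps0^2 h^2 n^2), in joules, with mu ~ m_e."""
    return -m_e * e**4 / (8.0 * eps0**2 * h**2 * n**2)

print(bohr_energy(1) / e)      # ~ -13.6 eV, the hydrogen ground state

# Predicted wavelength of the Balmer-alpha transition, n = 3 -> 2.
delta_e = bohr_energy(3) - bohr_energy(2)
print(h * c / delta_e * 1e9)   # ~656 nm, the observed red Balmer line
```

The 656 nm line is what the spectroscope actually shows; the labels n = 2 and n = 3 attached to it come from the model that reproduces the number, which is exactly the inference-versus-measurement distinction the thread is circling around.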